Aug 13 07:09:12.912001 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:09:12.912040 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:09:12.912056 kernel: BIOS-provided physical RAM map:
Aug 13 07:09:12.912065 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:09:12.912074 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 07:09:12.912083 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 07:09:12.912094 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 07:09:12.912104 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 07:09:12.912113 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 07:09:12.912122 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 07:09:12.912136 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 07:09:12.912145 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Aug 13 07:09:12.912158 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Aug 13 07:09:12.912167 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Aug 13 07:09:12.912182 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 07:09:12.912193 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 07:09:12.912207 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 07:09:12.912217 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 07:09:12.912227 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 07:09:12.912237 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:09:12.912247 kernel: NX (Execute Disable) protection: active
Aug 13 07:09:12.912257 kernel: APIC: Static calls initialized
Aug 13 07:09:12.912267 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:09:12.912276 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Aug 13 07:09:12.912286 kernel: SMBIOS 2.8 present.
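The BIOS-e820 entries above are the firmware's physical-memory map, a flat list of (start, end, type) ranges. A minimal Python sketch of tallying the ranges marked usable from a captured copy of this log (the helper names are ours, not part of the boot); here they sum to 2,628,612,096 bytes, i.e. 2,567,004 KiB, within one 4 KiB page of the 2567000K total the kernel reports in its later Memory: line:

    import re

    # Matches entries such as:
    #   BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    def parse_e820(log_text):
        """Yield (start, end, type) for every BIOS-e820 entry in the log."""
        for line in log_text.splitlines():
            m = E820_RE.search(line)
            if m:
                yield int(m.group(1), 16), int(m.group(2), 16), m.group(3).strip()

    def usable_bytes(log_text):
        """Total size of the ranges the firmware reported as 'usable'."""
        return sum(end - start + 1
                   for start, end, kind in parse_e820(log_text)
                   if kind == "usable")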
Aug 13 07:09:12.912297 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 07:09:12.912306 kernel: Hypervisor detected: KVM
Aug 13 07:09:12.912320 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:09:12.912330 kernel: kvm-clock: using sched offset of 5357481004 cycles
Aug 13 07:09:12.912340 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:09:12.912351 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 07:09:12.912361 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:09:12.912372 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:09:12.912382 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 07:09:12.912392 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:09:12.912415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:09:12.912465 kernel: Using GB pages for direct mapping
Aug 13 07:09:12.912477 kernel: Secure boot disabled
Aug 13 07:09:12.912487 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:09:12.912498 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 07:09:12.912516 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 07:09:12.912533 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:09:12.912544 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:09:12.912559 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 07:09:12.912570 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:09:12.912585 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:09:12.912596 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:09:12.912624 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:09:12.912635 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 07:09:12.912646 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 07:09:12.912661 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 07:09:12.912672 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 07:09:12.912685 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 07:09:12.912698 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 07:09:12.912709 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 07:09:12.912720 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 07:09:12.912731 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 07:09:12.912741 kernel: No NUMA configuration found
Aug 13 07:09:12.912756 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 07:09:12.912771 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 07:09:12.912782 kernel: Zone ranges:
Aug 13 07:09:12.912793 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:09:12.912804 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 07:09:12.912814 kernel: Normal empty
Aug 13 07:09:12.912825 kernel: Movable zone start for each node
Aug 13 07:09:12.912836 kernel: Early memory node ranges
Aug 13 07:09:12.912847 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:09:12.912858 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 07:09:12.912868 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 07:09:12.912884 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 07:09:12.912894 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 07:09:12.912905 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 07:09:12.912919 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 07:09:12.912930 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:09:12.912941 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:09:12.912952 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 07:09:12.912962 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:09:12.912973 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 07:09:12.912988 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 07:09:12.912998 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 07:09:12.913009 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:09:12.913020 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:09:12.913031 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:09:12.913042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:09:12.913052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:09:12.913063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:09:12.913074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:09:12.913088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:09:12.913099 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:09:12.913110 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:09:12.913121 kernel: TSC deadline timer available
Aug 13 07:09:12.913132 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 07:09:12.913142 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:09:12.913153 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 07:09:12.913164 kernel: kvm-guest: setup PV sched yield
Aug 13 07:09:12.913175 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:09:12.913189 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:09:12.913200 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:09:12.913211 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 07:09:12.913222 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 13 07:09:12.913233 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 13 07:09:12.913244 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 07:09:12.913255 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:09:12.913266 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:09:12.913278 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:09:12.913297 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:09:12.913308 kernel: random: crng init done
Aug 13 07:09:12.913319 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:09:12.913329 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:09:12.913340 kernel: Fallback order for Node 0: 0
Aug 13 07:09:12.913351 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 07:09:12.913362 kernel: Policy zone: DMA32
Aug 13 07:09:12.913373 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:09:12.913389 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 171124K reserved, 0K cma-reserved)
Aug 13 07:09:12.913400 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 07:09:12.913411 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:09:12.913421 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:09:12.913432 kernel: Dynamic Preempt: voluntary
Aug 13 07:09:12.913463 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:09:12.913483 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:09:12.913495 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 07:09:12.913507 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:09:12.913519 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:09:12.913530 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:09:12.913541 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:09:12.913556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 07:09:12.913568 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 07:09:12.913583 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:09:12.913594 kernel: Console: colour dummy device 80x25
Aug 13 07:09:12.913620 kernel: printk: console [ttyS0] enabled
Aug 13 07:09:12.913635 kernel: ACPI: Core revision 20230628
Aug 13 07:09:12.913646 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:09:12.913658 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:09:12.913669 kernel: x2apic enabled
Aug 13 07:09:12.913681 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:09:12.913694 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 07:09:12.913707 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 07:09:12.913721 kernel: kvm-guest: setup PV IPIs
Aug 13 07:09:12.913732 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:09:12.913747 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 07:09:12.913758 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 07:09:12.913770 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:09:12.913781 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 07:09:12.913793 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 07:09:12.913804 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:09:12.913815 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:09:12.913827 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:09:12.913838 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 07:09:12.913853 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 07:09:12.913864 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:09:12.913876 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:09:12.913887 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 07:09:12.913903 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 07:09:12.913915 kernel: x86/bugs: return thunk changed
Aug 13 07:09:12.913926 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 07:09:12.913937 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:09:12.913952 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:09:12.913963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:09:12.913975 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:09:12.913986 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:09:12.913998 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:09:12.914010 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:09:12.914021 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:09:12.914033 kernel: landlock: Up and running.
Aug 13 07:09:12.914044 kernel: SELinux: Initializing.
Aug 13 07:09:12.914059 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:09:12.914070 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:09:12.914082 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 07:09:12.914094 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:09:12.914105 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:09:12.914117 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:09:12.914128 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 07:09:12.914139 kernel: ... version: 0
Aug 13 07:09:12.914150 kernel: ... bit width: 48
Aug 13 07:09:12.914165 kernel: ... generic registers: 6
Aug 13 07:09:12.914176 kernel: ... value mask: 0000ffffffffffff
Aug 13 07:09:12.914187 kernel: ... max period: 00007fffffffffff
Aug 13 07:09:12.914198 kernel: ... fixed-purpose events: 0
Aug 13 07:09:12.914210 kernel: ... event mask: 000000000000003f
Aug 13 07:09:12.914221 kernel: signal: max sigframe size: 1776
Aug 13 07:09:12.914232 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:09:12.914244 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:09:12.914255 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:09:12.914270 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:09:12.914281 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 07:09:12.914292 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 07:09:12.914304 kernel: smpboot: Max logical packages: 1
Aug 13 07:09:12.914319 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 07:09:12.914330 kernel: devtmpfs: initialized
Aug 13 07:09:12.914341 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:09:12.914353 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 07:09:12.914364 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 07:09:12.914379 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 07:09:12.914391 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 07:09:12.914402 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 07:09:12.914414 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:09:12.914425 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 07:09:12.914437 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:09:12.914456 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:09:12.914468 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:09:12.914479 kernel: audit: type=2000 audit(1755068952.086:1): state=initialized audit_enabled=0 res=1
Aug 13 07:09:12.914495 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:09:12.914506 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:09:12.914517 kernel: cpuidle: using governor menu
Aug 13 07:09:12.914529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:09:12.914540 kernel: dca service started, version 1.12.1
Aug 13 07:09:12.914552 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:09:12.914563 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:09:12.914574 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:09:12.914586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
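The Kernel command line entry earlier carries a doubled rootflags=rw mount.usrflags=ro pair ahead of the parameters already present in the firmware-reported Command line. A short Python sketch (the helper name is ours) of splitting such a line into key/value pairs, e.g. to spot duplicates; quoting of values containing spaces is ignored here for brevity:

    def parse_cmdline(cmdline):
        """Split a kernel command line into (key, value) pairs.

        Bare flags (no '=') get a value of None; duplicates such as the
        doubled rootflags=rw are kept in order, as the kernel sees them.
        """
        params = []
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params.append((key, value if sep else None))
        return params

    # On a live machine: parse_cmdline(open("/proc/cmdline").read())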
Aug 13 07:09:12.914620 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:09:12.914632 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:09:12.914643 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:09:12.914655 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:09:12.914666 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:09:12.914678 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:09:12.914689 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:09:12.914701 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:09:12.914712 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:09:12.914727 kernel: ACPI: Interpreter enabled
Aug 13 07:09:12.914738 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:09:12.914750 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:09:12.914762 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:09:12.914773 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:09:12.914785 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:09:12.914796 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:09:12.915082 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:09:12.915279 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 07:09:12.915465 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 07:09:12.915481 kernel: PCI host bridge to bus 0000:00
Aug 13 07:09:12.915704 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:09:12.915871 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:09:12.916032 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:09:12.916193 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 07:09:12.916362 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:09:12.916536 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 07:09:12.916768 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:09:12.916982 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:09:12.917183 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 07:09:12.917394 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 07:09:12.917589 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 07:09:12.917808 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 07:09:12.918015 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 07:09:12.918193 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:09:12.918399 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:09:12.918591 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 07:09:12.918799 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 07:09:12.918982 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 07:09:12.919210 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:09:12.919389 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 07:09:12.919578 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 07:09:12.919784 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 07:09:12.919986 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:09:12.920166 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 07:09:12.920351 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 07:09:12.920541 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 07:09:12.920791 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 07:09:12.920991 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:09:12.921167 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:09:12.921364 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:09:12.921552 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 07:09:12.921753 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 07:09:12.921954 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:09:12.922128 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 07:09:12.922144 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:09:12.922156 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:09:12.922168 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:09:12.922179 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:09:12.922197 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:09:12.922209 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:09:12.922221 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:09:12.922232 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:09:12.922243 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:09:12.922255 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:09:12.922267 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:09:12.922278 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:09:12.922290 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:09:12.922305 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:09:12.922317 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:09:12.922328 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:09:12.922340 kernel: iommu: Default domain type: Translated
Aug 13 07:09:12.922352 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:09:12.922363 kernel: efivars: Registered efivars operations
Aug 13 07:09:12.922375 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:09:12.922386 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:09:12.922398 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 07:09:12.922409 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 07:09:12.922424 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 07:09:12.922436 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 07:09:12.922673 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:09:12.922849 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:09:12.923021 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:09:12.923038 kernel: vgaarb: loaded
Aug 13 07:09:12.923049 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:09:12.923061 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:09:12.923078 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:09:12.923090 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:09:12.923102 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:09:12.923113 kernel: pnp: PnP ACPI init
Aug 13 07:09:12.923316 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:09:12.923334 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 07:09:12.923346 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:09:12.923358 kernel: NET: Registered PF_INET protocol family
Aug 13 07:09:12.923374 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:09:12.923386 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 07:09:12.923398 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:09:12.923409 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:09:12.923421 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 07:09:12.923432 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 07:09:12.923453 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:09:12.923465 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:09:12.923477 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:09:12.923493 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:09:12.923687 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 07:09:12.923867 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 07:09:12.924028 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:09:12.924187 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:09:12.924344 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:09:12.924536 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 07:09:12.924734 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:09:12.924902 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 07:09:12.924918 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:09:12.924930 kernel: Initialise system trusted keyrings
Aug 13 07:09:12.924941 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 07:09:12.924953 kernel: Key type asymmetric registered
Aug 13 07:09:12.924964 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:09:12.924975 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:09:12.924987 kernel: io scheduler mq-deadline registered
Aug 13 07:09:12.924998 kernel: io scheduler kyber registered
Aug 13 07:09:12.925015 kernel: io scheduler bfq registered
Aug 13 07:09:12.925027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:09:12.925039 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:09:12.925051 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:09:12.925062 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:09:12.925074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:09:12.925086 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:09:12.925098 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:09:12.925109 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:09:12.925124 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:09:12.925354 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 07:09:12.925373 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:09:12.925549 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 07:09:12.925733 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:09:12 UTC (1755068952)
Aug 13 07:09:12.925910 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 07:09:12.925928 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 07:09:12.925939 kernel: efifb: probing for efifb
Aug 13 07:09:12.925957 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Aug 13 07:09:12.925969 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Aug 13 07:09:12.925980 kernel: efifb: scrolling: redraw
Aug 13 07:09:12.925991 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Aug 13 07:09:12.926004 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 07:09:12.926015 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:09:12.926050 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:09:12.926065 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:09:12.926077 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:09:12.926092 kernel: Segment Routing with IPv6
Aug 13 07:09:12.926104 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:09:12.926116 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:09:12.926128 kernel: Key type dns_resolver registered
Aug 13 07:09:12.926140 kernel: IPI shorthand broadcast: enabled
Aug 13 07:09:12.926152 kernel: sched_clock: Marking stable (1075002786, 111553150)->(1244533576, -57977640)
Aug 13 07:09:12.926164 kernel: registered taskstats version 1
Aug 13 07:09:12.926176 kernel: Loading compiled-in X.509 certificates
Aug 13 07:09:12.926188 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:09:12.926203 kernel: Key type .fscrypt registered
Aug 13 07:09:12.926215 kernel: Key type fscrypt-provisioning registered
Aug 13 07:09:12.926227 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:09:12.926239 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:09:12.926251 kernel: ima: No architecture policies found
Aug 13 07:09:12.926263 kernel: clk: Disabling unused clocks
Aug 13 07:09:12.926275 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:09:12.926287 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:09:12.926303 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:09:12.926315 kernel: Run /init as init process
Aug 13 07:09:12.926326 kernel: with arguments:
Aug 13 07:09:12.926338 kernel: /init
Aug 13 07:09:12.926350 kernel: with environment:
Aug 13 07:09:12.926361 kernel: HOME=/
Aug 13 07:09:12.926373 kernel: TERM=linux
Aug 13 07:09:12.926385 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:09:12.926400 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:09:12.926419 systemd[1]: Detected virtualization kvm.
Aug 13 07:09:12.926432 systemd[1]: Detected architecture x86-64.
Aug 13 07:09:12.926454 systemd[1]: Running in initrd.
Aug 13 07:09:12.926467 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:09:12.926480 systemd[1]: Hostname set to .
Aug 13 07:09:12.926497 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:09:12.926510 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:09:12.926522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:09:12.926536 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:09:12.926549 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:09:12.926562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:09:12.926575 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:09:12.926592 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:09:12.926627 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:09:12.926641 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:09:12.926654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:09:12.926666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:09:12.926679 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:09:12.926692 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:09:12.926711 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:09:12.926726 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:09:12.926739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:09:12.926752 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:09:12.926765 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:09:12.926777 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
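The rtc_cmos entry earlier gives the same instant both as an ISO-8601 date and as a Unix timestamp, and the audit entry uses the same epoch value (audit(1755068952.086:1)); the two forms are easy to cross-check:

    from datetime import datetime, timezone

    # rtc_cmos 00:04: setting system clock to 2025-08-13T07:09:12 UTC (1755068952)
    stamp = datetime.fromtimestamp(1755068952, tz=timezone.utc)
    assert stamp.strftime("%Y-%m-%dT%H:%M:%S") == "2025-08-13T07:09:12"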
Aug 13 07:09:12.926790 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:09:12.926803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:09:12.926816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:09:12.926833 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:09:12.926846 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:09:12.926858 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:09:12.926872 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:09:12.926884 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:09:12.926897 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:09:12.926909 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:09:12.926922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:12.926939 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:09:12.926952 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:09:12.926965 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:09:12.926978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:09:12.927017 systemd-journald[192]: Collecting audit messages is disabled.
Aug 13 07:09:12.927051 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:12.927064 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:09:12.927078 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:09:12.927095 systemd-journald[192]: Journal started
Aug 13 07:09:12.927121 systemd-journald[192]: Runtime Journal (/run/log/journal/6e6d274779094e03a6de7bd1c5a98ade) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:09:12.937761 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:09:12.910298 systemd-modules-load[193]: Inserted module 'overlay'
Aug 13 07:09:12.941106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:09:12.943633 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:09:12.943670 kernel: Bridge firewalling registered
Aug 13 07:09:12.943905 systemd-modules-load[193]: Inserted module 'br_netfilter'
Aug 13 07:09:12.945765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:09:12.950994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:09:12.953842 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:09:12.959479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:09:12.963976 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:09:12.968136 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:09:12.969440 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:09:12.977563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:09:12.979576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:09:12.992317 dracut-cmdline[225]: dracut-dracut-053
Aug 13 07:09:12.995511 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:09:13.020705 systemd-resolved[228]: Positive Trust Anchors:
Aug 13 07:09:13.020733 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:09:13.020772 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:09:13.024029 systemd-resolved[228]: Defaulting to hostname 'linux'.
Aug 13 07:09:13.025599 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:09:13.029885 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:09:13.093644 kernel: SCSI subsystem initialized
Aug 13 07:09:13.103627 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:09:13.113629 kernel: iscsi: registered transport (tcp)
Aug 13 07:09:13.137681 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:09:13.137717 kernel: QLogic iSCSI HBA Driver
Aug 13 07:09:13.200690 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:09:13.213833 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:09:13.245639 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:09:13.245705 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:09:13.245717 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:09:13.290642 kernel: raid6: avx2x4 gen() 23704 MB/s
Aug 13 07:09:13.307632 kernel: raid6: avx2x2 gen() 30552 MB/s
Aug 13 07:09:13.324679 kernel: raid6: avx2x1 gen() 25174 MB/s
Aug 13 07:09:13.324701 kernel: raid6: using algorithm avx2x2 gen() 30552 MB/s
Aug 13 07:09:13.342684 kernel: raid6: .... xor() 19689 MB/s, rmw enabled
Aug 13 07:09:13.342729 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:09:13.363635 kernel: xor: automatically using best checksumming function avx
Aug 13 07:09:13.520653 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:09:13.533965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:09:13.542812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:09:13.560320 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Aug 13 07:09:13.565372 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:09:13.577756 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:09:13.590997 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Aug 13 07:09:13.626000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:09:13.640810 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:09:13.716111 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:09:13.731935 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:09:13.741243 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:09:13.744668 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:09:13.747661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:09:13.749115 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:09:13.758517 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 07:09:13.762812 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:09:13.775037 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 07:09:13.776007 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:09:13.781945 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:09:13.788037 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:09:13.795307 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:09:13.795364 kernel: GPT:9289727 != 19775487
Aug 13 07:09:13.795396 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:09:13.795435 kernel: GPT:9289727 != 19775487
Aug 13 07:09:13.795459 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:09:13.795488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:09:13.788151 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:09:13.793163 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:09:13.803209 kernel: libata version 3.00 loaded.
Aug 13 07:09:13.795293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:09:13.795570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:13.798404 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:13.805977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:13.813759 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:09:13.813783 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:09:13.815804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:09:13.815945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:13.821689 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:09:13.823625 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:09:13.826740 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:09:13.826963 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:09:13.831347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:13.832839 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Aug 13 07:09:13.836813 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (453)
Aug 13 07:09:13.836835 kernel: scsi host0: ahci
Aug 13 07:09:13.846652 kernel: scsi host1: ahci
Aug 13 07:09:13.847622 kernel: scsi host2: ahci
Aug 13 07:09:13.849122 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:09:13.857822 kernel: scsi host3: ahci
Aug 13 07:09:13.858037 kernel: scsi host4: ahci
Aug 13 07:09:13.858193 kernel: scsi host5: ahci
Aug 13 07:09:13.858355 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 13 07:09:13.858367 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 13 07:09:13.858377 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 13 07:09:13.858388 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 13 07:09:13.858398 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 13 07:09:13.858409 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 13 07:09:13.856170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:13.868703 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:09:13.873838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:09:13.877831 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:09:13.878075 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:09:13.892774 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:09:13.893925 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:09:13.904680 disk-uuid[557]: Primary Header is updated.
Aug 13 07:09:13.904680 disk-uuid[557]: Secondary Entries is updated.
Aug 13 07:09:13.904680 disk-uuid[557]: Secondary Header is updated.
Aug 13 07:09:13.908820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:09:13.912629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:09:13.917637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:09:13.917658 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
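The GPT complaints just above (GPT:9289727 != 19775487) are the usual signature of a raw disk image that was enlarged after creation: the backup GPT header still sits at the image's old last sector rather than at the new end of the device, which appears to be why disk-uuid.service rewrites the headers here (Primary Header is updated ... Secondary Header is updated). The arithmetic, using the sector count from the virtio_blk entry:

    SECTOR = 512  # virtio_blk reports 512-byte logical blocks

    disk_sectors = 19775488  # virtio_blk: [vda] 19775488 512-byte logical blocks
    backup_lba = 9289727     # where the on-disk GPT expects the alternate header

    expected_lba = disk_sectors - 1      # the backup header belongs on the last LBA
    grown = (expected_lba - backup_lba) * SECTOR
    print(f"alternate header should sit at LBA {expected_lba}; "
          f"the image grew by {grown / 2**30:.0f} GiB")  # prints: ... grew by 5 GiB

GNU Parted (as the kernel suggests) or sgdisk -e would perform the same relocation of the backup structures by hand.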
Aug 13 07:09:14.164648 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:09:14.164734 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 07:09:14.166084 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 07:09:14.166102 kernel: ata3.00: applying bridge limits
Aug 13 07:09:14.166983 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:09:14.167629 kernel: ata3.00: configured for UDMA/100
Aug 13 07:09:14.169633 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 07:09:14.173624 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:09:14.173659 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:09:14.173670 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:09:14.219134 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 07:09:14.219442 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:09:14.231636 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:09:14.918655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:09:14.919120 disk-uuid[561]: The operation has completed successfully.
Aug 13 07:09:14.947780 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:09:14.947899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:09:14.971816 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:09:14.975519 sh[596]: Success
Aug 13 07:09:14.989660 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 07:09:15.024761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:09:15.038262 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:09:15.041251 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:09:15.053197 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:09:15.053245 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:09:15.053257 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:09:15.054161 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:09:15.054869 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:09:15.059193 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:09:15.060469 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:09:15.071753 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:09:15.073884 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:09:15.082661 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:09:15.082693 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:09:15.082704 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:09:15.085638 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:09:15.095087 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:09:15.096863 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:09:15.104467 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:09:15.112814 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:09:15.173125 ignition[684]: Ignition 2.19.0
Aug 13 07:09:15.173138 ignition[684]: Stage: fetch-offline
Aug 13 07:09:15.173213 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:09:15.173228 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:09:15.173344 ignition[684]: parsed url from cmdline: ""
Aug 13 07:09:15.173349 ignition[684]: no config URL provided
Aug 13 07:09:15.173356 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:09:15.173368 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:09:15.173405 ignition[684]: op(1): [started] loading QEMU firmware config module
Aug 13 07:09:15.173411 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 07:09:15.184146 ignition[684]: op(1): [finished] loading QEMU firmware config module
Aug 13 07:09:15.197412 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:09:15.205764 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:09:15.225845 ignition[684]: parsing config with SHA512: 0b2054e39101ec6ff3f111d1d11d4ce87649bfb30f9c6f4858204255f707c7443e6133b15fd317b395950ce2d08e9945ab484813f2206bd313e672575ddbfb6e
Aug 13 07:09:15.228963 systemd-networkd[784]: lo: Link UP
Aug 13 07:09:15.228971 systemd-networkd[784]: lo: Gained carrier
Aug 13 07:09:15.230174 unknown[684]: fetched base config from "system"
Aug 13 07:09:15.230187 unknown[684]: fetched user config from "qemu"
Aug 13 07:09:15.232947 ignition[684]: fetch-offline: fetch-offline passed
Aug 13 07:09:15.230740 systemd-networkd[784]: Enumeration completed
Aug 13 07:09:15.233063 ignition[684]: Ignition finished successfully
Aug 13 07:09:15.230816 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:09:15.231740 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:09:15.231744 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:09:15.232192 systemd[1]: Reached target network.target - Network.
Aug 13 07:09:15.232566 systemd-networkd[784]: eth0: Link UP
Aug 13 07:09:15.232571 systemd-networkd[784]: eth0: Gained carrier
Aug 13 07:09:15.232577 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:09:15.235430 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:09:15.237034 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:09:15.242668 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:09:15.242834 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:09:15.258679 ignition[787]: Ignition 2.19.0
Aug 13 07:09:15.258690 ignition[787]: Stage: kargs
Aug 13 07:09:15.258910 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:09:15.258922 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:09:15.259773 ignition[787]: kargs: kargs passed
Aug 13 07:09:15.263018 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
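Ignition logs the SHA512 of the config it is about to parse (parsing config with SHA512: 0b2054e3...). A minimal sketch of reproducing such a digest for comparison; /run/ignition.json is the path named in the ConditionPathExists check above, and whether its bytes match exactly what Ignition hashed is an assumption worth verifying per platform:

    import hashlib

    def config_sha512(path):
        """Hex SHA512 of a config file, comparable to the value logged
        after 'parsing config with SHA512:'."""
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()

    # e.g. config_sha512("/run/ignition.json")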
Aug 13 07:09:15.259820 ignition[787]: Ignition finished successfully
Aug 13 07:09:15.275896 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:09:15.288030 ignition[798]: Ignition 2.19.0
Aug 13 07:09:15.288041 ignition[798]: Stage: disks
Aug 13 07:09:15.288253 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:09:15.288266 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:09:15.289232 ignition[798]: disks: disks passed
Aug 13 07:09:15.291576 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:09:15.289277 ignition[798]: Ignition finished successfully
Aug 13 07:09:15.292943 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:09:15.294359 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:09:15.296444 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:09:15.297425 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:09:15.297805 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:09:15.307752 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:09:15.319570 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:09:15.325886 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:09:15.338727 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:09:15.426636 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:09:15.426799 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:09:15.427668 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:09:15.436690 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:09:15.438475 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:09:15.439777 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:09:15.439833 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:09:15.446729 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Aug 13 07:09:15.439864 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:09:15.451595 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:09:15.451624 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:09:15.451635 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:09:15.447118 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:09:15.453620 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:09:15.460766 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:09:15.462396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
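The fsck summary above (ROOT: clean, 14/553520 files, 52654/553472 blocks) reads as used/total counts; as fractions, the freshly created root filesystem is nearly empty:

    files_used, files_total = 14, 553520
    blocks_used, blocks_total = 52654, 553472

    print(f"inodes {files_used / files_total:.4%}, "
          f"blocks {blocks_used / blocks_total:.1%} in use")  # ~0.0025% and ~9.5%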
Aug 13 07:09:15.498942 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:09:15.504722 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:09:15.510129 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:09:15.515578 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:09:15.613866 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:09:15.624812 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:09:15.626328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:09:15.637673 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:09:15.651345 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:09:15.665350 ignition[929]: INFO : Ignition 2.19.0
Aug 13 07:09:15.665350 ignition[929]: INFO : Stage: mount
Aug 13 07:09:15.667543 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:09:15.667543 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:09:15.667543 ignition[929]: INFO : mount: mount passed
Aug 13 07:09:15.667543 ignition[929]: INFO : Ignition finished successfully
Aug 13 07:09:15.669094 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:09:15.681791 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:09:16.052843 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:09:16.065809 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:09:16.073627 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Aug 13 07:09:16.075901 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:09:16.075917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:09:16.075927 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:09:16.079630 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:09:16.080564 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:09:16.200502 ignition[957]: INFO : Ignition 2.19.0
Aug 13 07:09:16.200502 ignition[957]: INFO : Stage: files
Aug 13 07:09:16.202843 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:09:16.202843 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:09:16.202843 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:09:16.207318 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:09:16.207318 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:09:16.212419 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:09:16.213968 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:09:16.215801 unknown[957]: wrote ssh authorized keys file for user: core
Aug 13 07:09:16.217114 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:09:16.219715 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:09:16.221792 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:09:16.270888 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:09:16.428953 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:09:16.428953 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:09:16.432662 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:09:16.434302 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:09:16.436091 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:09:16.437757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:09:16.439560 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:09:16.441377 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:09:16.443463 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:09:16.446036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:09:16.448043 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:09:16.449913 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:09:16.452539 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:09:16.455036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:09:16.457320 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:09:16.901239 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:09:17.278255 systemd-networkd[784]: eth0: Gained IPv6LL
Aug 13 07:09:17.512379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:09:17.512379 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 07:09:17.516619 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:09:17.542319 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:09:17.548421 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:09:17.550091 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:09:17.550091 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:09:17.550091 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:09:17.550091 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:09:17.550091 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:09:17.550091 ignition[957]: INFO : files: files passed
Aug 13 07:09:17.550091 ignition[957]: INFO : Ignition finished successfully
Aug 13 07:09:17.552209 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:09:17.560929 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:09:17.562587 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:09:17.568031 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:09:17.568174 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:09:17.573197 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 07:09:17.575870 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:09:17.575870 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:09:17.580579 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:09:17.579202 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:09:17.580822 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:09:17.593758 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:09:17.617782 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:09:17.617932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:09:17.620104 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:09:17.622070 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:09:17.624027 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:09:17.624849 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:09:17.645389 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:09:17.663767 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:09:17.674461 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:09:17.674980 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:09:17.675301 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:09:17.675623 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:09:17.675765 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:09:17.676390 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:09:17.676888 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:09:17.677194 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:09:17.677517 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:09:17.677993 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:09:17.715808 ignition[1012]: INFO : Ignition 2.19.0
Aug 13 07:09:17.715808 ignition[1012]: INFO : Stage: umount
Aug 13 07:09:17.715808 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:09:17.715808 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:09:17.715808 ignition[1012]: INFO : umount: umount passed
Aug 13 07:09:17.715808 ignition[1012]: INFO : Ignition finished successfully
Aug 13 07:09:17.678311 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:09:17.678644 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:09:17.678950 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:09:17.679260 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:09:17.679579 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:09:17.679851 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:09:17.680006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:09:17.680678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:09:17.680997 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:09:17.681267 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:09:17.681442 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:09:17.681925 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:09:17.682054 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:09:17.682710 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:09:17.682849 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:09:17.683277 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:09:17.683686 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:09:17.687668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:09:17.687980 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:09:17.688275 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:09:17.688596 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:09:17.688730 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:09:17.689094 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:09:17.689205 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:09:17.689567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:09:17.689728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:09:17.690040 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:09:17.690164 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:09:17.691290 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:09:17.692234 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:09:17.692482 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:09:17.692657 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:09:17.693064 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:09:17.693213 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:09:17.697046 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:09:17.697184 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:09:17.717233 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:09:17.717410 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:09:17.720146 systemd[1]: Stopped target network.target - Network.
Aug 13 07:09:17.721370 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:09:17.721442 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:09:17.723240 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:09:17.723319 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:09:17.724956 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:09:17.725012 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:09:17.726850 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:09:17.726905 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:09:17.729173 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:09:17.731083 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:09:17.733953 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:09:17.736690 systemd-networkd[784]: eth0: DHCPv6 lease lost
Aug 13 07:09:17.738364 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:09:17.738510 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:09:17.742527 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:09:17.742688 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:09:17.745372 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:09:17.745426 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:09:17.752770 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:09:17.753095 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:09:17.753167 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:09:17.753499 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:09:17.753560 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:09:17.757238 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:09:17.757292 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:09:17.757531 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:09:17.757577 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:09:17.758123 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:09:17.771953 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:09:17.772090 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:09:17.809351 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:09:17.809585 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:09:17.810284 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:09:17.810354 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:09:17.812974 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:09:17.813021 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:09:17.813234 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:09:17.813288 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:09:17.814011 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:09:17.814069 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:09:17.814802 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:09:17.814859 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:09:17.918949 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:09:17.921185 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:09:17.921272 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:09:17.921843 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:09:17.921920 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:09:17.925072 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:09:17.925137 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:09:17.925369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:09:17.925423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:17.943281 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:09:17.943452 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:09:17.944308 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:09:17.944367 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:09:17.954739 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:09:17.954866 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:09:17.955463 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:09:17.979772 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:09:17.989017 systemd[1]: Switching root.
Aug 13 07:09:18.017747 systemd-journald[192]: Journal stopped
Aug 13 07:09:19.395713 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:09:19.395826 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:09:19.395860 kernel: SELinux: policy capability open_perms=1
Aug 13 07:09:19.395877 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:09:19.395891 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:09:19.395905 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:09:19.395920 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:09:19.395936 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:09:19.395951 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:09:19.395967 kernel: audit: type=1403 audit(1755068958.579:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:09:19.395985 systemd[1]: Successfully loaded SELinux policy in 47.978ms.
Aug 13 07:09:19.396034 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.361ms.
Aug 13 07:09:19.396052 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:09:19.396066 systemd[1]: Detected virtualization kvm.
Aug 13 07:09:19.396078 systemd[1]: Detected architecture x86-64.
Aug 13 07:09:19.396090 systemd[1]: Detected first boot.
Aug 13 07:09:19.396102 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:09:19.396115 zram_generator::config[1059]: No configuration found.
Aug 13 07:09:19.396134 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:09:19.396158 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:09:19.396171 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:09:19.396184 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:09:19.396213 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:09:19.396226 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:09:19.396238 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:09:19.396259 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:09:19.396272 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:09:19.396285 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:09:19.396305 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:09:19.396317 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:09:19.396329 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:09:19.396341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:09:19.396354 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:09:19.396366 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:09:19.396379 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:09:19.396391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:09:19.396403 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:09:19.396420 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:09:19.396432 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:09:19.396444 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:09:19.396457 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:09:19.396469 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:09:19.396481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:09:19.396497 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:09:19.396509 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:09:19.396527 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:09:19.396540 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:09:19.396552 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:09:19.396564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:09:19.396576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:09:19.396588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:09:19.396617 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:09:19.396631 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:09:19.396643 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:09:19.396661 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:09:19.396673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:19.396685 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:09:19.396697 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:09:19.396709 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:09:19.396721 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:09:19.396733 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:09:19.396745 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:09:19.396763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:19.396776 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:09:19.396792 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:09:19.396804 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:09:19.396816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:09:19.396828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:09:19.396840 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:09:19.396852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:09:19.396864 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:09:19.396882 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:09:19.396894 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:09:19.396906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:09:19.396918 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:09:19.396930 kernel: fuse: init (API version 7.39)
Aug 13 07:09:19.396944 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:09:19.396958 kernel: loop: module loaded
Aug 13 07:09:19.396970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:09:19.396982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:09:19.397001 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:09:19.397013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:09:19.397026 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:09:19.397038 systemd[1]: Stopped verity-setup.service.
Aug 13 07:09:19.397051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:19.397091 systemd-journald[1129]: Collecting audit messages is disabled.
Aug 13 07:09:19.397123 kernel: ACPI: bus type drm_connector registered
Aug 13 07:09:19.397135 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:09:19.397148 systemd-journald[1129]: Journal started
Aug 13 07:09:19.397170 systemd-journald[1129]: Runtime Journal (/run/log/journal/6e6d274779094e03a6de7bd1c5a98ade) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:09:19.149741 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:09:19.170019 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:09:19.170505 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:09:19.408806 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:09:19.410652 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:09:19.412226 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:09:19.413519 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:09:19.415156 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:09:19.416677 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:09:19.418097 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:09:19.419810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:09:19.421566 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:09:19.421792 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:09:19.423309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:09:19.423507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:09:19.424966 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:09:19.425155 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:09:19.426521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:09:19.426719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:09:19.428269 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:09:19.428449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:09:19.429849 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:09:19.430031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:09:19.431521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:09:19.432962 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:09:19.434511 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:09:19.453075 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:09:19.464741 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:09:19.467200 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:09:19.468300 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:09:19.468334 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:09:19.470372 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:09:19.473885 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:09:19.476538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:09:19.477705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:19.480897 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:09:19.483418 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:09:19.484693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:09:19.489436 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:09:19.491838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:09:19.493472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:09:19.502577 systemd-journald[1129]: Time spent on flushing to /var/log/journal/6e6d274779094e03a6de7bd1c5a98ade is 28.982ms for 993 entries.
Aug 13 07:09:19.502577 systemd-journald[1129]: System Journal (/var/log/journal/6e6d274779094e03a6de7bd1c5a98ade) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:09:19.538825 systemd-journald[1129]: Received client request to flush runtime journal.
Aug 13 07:09:19.509419 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:09:19.514784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:09:19.518587 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:09:19.521983 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:09:19.523497 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:09:19.525162 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:09:19.535444 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:09:19.540713 kernel: loop0: detected capacity change from 0 to 221472
Aug 13 07:09:19.548814 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:09:19.551446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:09:19.553416 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:09:19.562468 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Aug 13 07:09:19.562488 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Aug 13 07:09:19.564832 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:09:19.571102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:09:19.573220 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:09:19.576641 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:09:19.581826 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:09:19.584949 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:09:19.586554 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:09:19.589269 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:09:19.602659 kernel: loop1: detected capacity change from 0 to 140768
Aug 13 07:09:19.619122 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:09:19.625778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:09:19.644643 kernel: loop2: detected capacity change from 0 to 142488
Aug 13 07:09:19.646232 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Aug 13 07:09:19.646264 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Aug 13 07:09:19.654554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:09:19.691634 kernel: loop3: detected capacity change from 0 to 221472
Aug 13 07:09:19.701651 kernel: loop4: detected capacity change from 0 to 140768
Aug 13 07:09:19.711635 kernel: loop5: detected capacity change from 0 to 142488
Aug 13 07:09:19.723300 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 13 07:09:19.723956 (sd-merge)[1203]: Merged extensions into '/usr'.
Aug 13 07:09:19.728764 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:09:19.728869 systemd[1]: Reloading...
Aug 13 07:09:19.805641 zram_generator::config[1229]: No configuration found.
Aug 13 07:09:20.342815 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:09:20.422008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:09:20.480215 systemd[1]: Reloading finished in 750 ms.
Aug 13 07:09:20.519937 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:09:20.521853 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:09:20.536929 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:09:20.539249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:09:20.549502 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:09:20.549521 systemd[1]: Reloading...
Aug 13 07:09:20.577438 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:09:20.577914 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:09:20.579007 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:09:20.579331 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Aug 13 07:09:20.579417 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Aug 13 07:09:20.583457 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:09:20.583472 systemd-tmpfiles[1267]: Skipping /boot
Aug 13 07:09:20.615721 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:09:20.615750 systemd-tmpfiles[1267]: Skipping /boot
Aug 13 07:09:20.676655 zram_generator::config[1294]: No configuration found.
Aug 13 07:09:20.823347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:09:20.878951 systemd[1]: Reloading finished in 328 ms.
Aug 13 07:09:20.900338 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:09:20.918677 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:09:20.928794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:09:20.931822 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:09:20.934301 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:09:20.937958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:09:20.944724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:09:20.948288 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:09:20.952223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:20.952400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:20.953699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:09:20.958884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:09:20.963857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:09:20.965278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:20.971873 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:09:20.973575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:20.974755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:09:20.975169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:09:20.981134 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:09:20.982372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:09:20.983807 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Aug 13 07:09:20.987980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:09:20.988425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:09:20.995360 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:09:20.999392 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:09:21.007145 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:21.007443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:21.017886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:09:21.024693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:09:21.026314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:21.026531 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:09:21.028930 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:09:21.030277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:21.033429 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:09:21.035217 augenrules[1368]: No rules
Aug 13 07:09:21.035534 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:09:21.037896 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:09:21.048115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:09:21.048344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:09:21.050162 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:09:21.050402 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:09:21.066983 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:21.067196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:21.074833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:09:21.077548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:09:21.083779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:09:21.087974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:09:21.089383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:21.093529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:09:21.094726 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:21.095477 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:09:21.098184 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:09:21.100083 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:09:21.102898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:09:21.107009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:09:21.108757 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:09:21.108947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:09:21.122937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:09:21.123191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:09:21.131903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:09:21.132102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:09:21.133628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1390)
Aug 13 07:09:21.139728 systemd-resolved[1337]: Positive Trust Anchors:
Aug 13 07:09:21.139753 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:09:21.139785 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:09:21.145785 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:09:21.149338 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Aug 13 07:09:21.153196 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:09:21.156459 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:09:21.157789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:09:21.165833 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:09:21.167007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:09:21.167084 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:09:21.170914 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:09:21.172007 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:09:21.190023 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:09:21.208640 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 07:09:21.218857 systemd-networkd[1402]: lo: Link UP
Aug 13 07:09:21.218873 systemd-networkd[1402]: lo: Gained carrier
Aug 13 07:09:21.221354 systemd-networkd[1402]: Enumeration completed
Aug 13 07:09:21.221482 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:09:21.222800 systemd[1]: Reached target network.target - Network.
Aug 13 07:09:21.223940 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:09:21.223957 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:09:21.224982 systemd-networkd[1402]: eth0: Link UP
Aug 13 07:09:21.224995 systemd-networkd[1402]: eth0: Gained carrier
Aug 13 07:09:21.225009 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:09:21.233840 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:09:21.239643 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:09:21.242317 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Aug 13 07:09:21.249045 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 07:09:21.249237 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 07:09:21.249431 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 07:09:21.242688 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:09:21.252422 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Aug 13 07:09:21.267307 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:09:21.885129 systemd-timesyncd[1417]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 13 07:09:21.885207 systemd-resolved[1337]: Clock change detected. Flushing caches.
Aug 13 07:09:21.885518 systemd-timesyncd[1417]: Initial clock synchronization to Wed 2025-08-13 07:09:21.884908 UTC.
Aug 13 07:09:21.885602 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:09:21.904000 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:09:21.902063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:21.928412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:09:21.928758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:22.057071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:22.083111 kernel: kvm_amd: TSC scaling supported
Aug 13 07:09:22.083186 kernel: kvm_amd: Nested Virtualization enabled
Aug 13 07:09:22.083230 kernel: kvm_amd: Nested Paging enabled
Aug 13 07:09:22.083243 kernel: kvm_amd: LBR virtualization supported
Aug 13 07:09:22.084129 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 13 07:09:22.084146 kernel: kvm_amd: Virtual GIF supported
Aug 13 07:09:22.110335 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:09:22.183550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:22.190518 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:09:22.202285 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:09:22.211356 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:09:22.248737 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:09:22.250325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:09:22.251436 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:09:22.252628 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:09:22.253874 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:09:22.255335 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:09:22.256546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:09:22.257997 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:09:22.259214 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:09:22.259246 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:09:22.260139 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:09:22.262162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:09:22.265147 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:09:22.277832 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:09:22.280285 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:09:22.281839 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:09:22.282999 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:09:22.283930 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:09:22.284892 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:09:22.284920 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:09:22.286024 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:09:22.288145 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:09:22.291309 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:09:22.292102 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:09:22.295788 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:09:22.297392 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:09:22.300860 jq[1447]: false
Aug 13 07:09:22.301211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:09:22.303139 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:09:22.310191 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:09:22.315140 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:09:22.319322 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:09:22.320820 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:09:22.321379 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:09:22.322106 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:09:22.327019 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found loop3
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found loop4
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found loop5
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found sr0
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda1
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda2
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda3
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found usr
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda4
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda6
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda7
Aug 13 07:09:22.330052 extend-filesystems[1448]: Found vda9
Aug 13 07:09:22.330052 extend-filesystems[1448]: Checking size of /dev/vda9
Aug 13 07:09:22.329860 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:09:22.328999 dbus-daemon[1446]: [system] SELinux support is enabled
Aug 13 07:09:22.377023 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 07:09:22.377070 extend-filesystems[1448]: Resized partition /dev/vda9
Aug 13 07:09:22.335833 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:09:22.383303 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:09:22.388578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1389)
Aug 13 07:09:22.339329 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:09:22.388681 jq[1461]: true
Aug 13 07:09:22.339774 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:09:22.388883 update_engine[1460]: I20250813 07:09:22.379001 1460 main.cc:92] Flatcar Update Engine starting
Aug 13 07:09:22.388883 update_engine[1460]: I20250813 07:09:22.384667 1460 update_check_scheduler.cc:74] Next update check in 6m14s
Aug 13 07:09:22.342344 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:09:22.342601 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:09:22.396304 tar[1467]: linux-amd64/helm
Aug 13 07:09:22.346733 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:09:22.397078 jq[1469]: true
Aug 13 07:09:22.347692 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:09:22.363814 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:09:22.363845 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:09:22.363855 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:09:22.370423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:09:22.370443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:09:22.385230 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:09:22.389158 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:09:22.397636 systemd-logind[1459]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:09:22.470074 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 07:09:22.470118 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 07:09:22.470118 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 07:09:22.470118 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 07:09:22.397658 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:09:22.486584 extend-filesystems[1448]: Resized filesystem in /dev/vda9
Aug 13 07:09:22.402845 systemd-logind[1459]: New seat seat0.
Aug 13 07:09:22.468058 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:09:22.471711 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:09:22.471928 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:09:22.506994 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:09:22.507348 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:09:22.507800 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:09:22.510259 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 07:09:22.634776 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:09:22.670904 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:09:22.823144 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:09:22.831674 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:09:22.831930 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:09:22.835222 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:09:22.898799 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:09:22.910405 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:09:22.915086 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:09:22.916479 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:09:22.938642 containerd[1471]: time="2025-08-13T07:09:22.938511373Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:09:23.010910 containerd[1471]: time="2025-08-13T07:09:23.010736299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:23.014813 containerd[1471]: time="2025-08-13T07:09:23.014729535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:23.014813 containerd[1471]: time="2025-08-13T07:09:23.014795789Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:09:23.014993 containerd[1471]: time="2025-08-13T07:09:23.014828941Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:09:23.015452 containerd[1471]: time="2025-08-13T07:09:23.015396595Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:09:23.015452 containerd[1471]: time="2025-08-13T07:09:23.015437602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:23.015674 containerd[1471]: time="2025-08-13T07:09:23.015627258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:23.015674 containerd[1471]: time="2025-08-13T07:09:23.015669397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:23.016127 containerd[1471]: time="2025-08-13T07:09:23.016088022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:23.016177 containerd[1471]: time="2025-08-13T07:09:23.016129710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:23.016177 containerd[1471]: time="2025-08-13T07:09:23.016147213Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:23.016177 containerd[1471]: time="2025-08-13T07:09:23.016157101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:23.016311 containerd[1471]: time="2025-08-13T07:09:23.016276204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:23.016884 containerd[1471]: time="2025-08-13T07:09:23.016821927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:23.017152 containerd[1471]: time="2025-08-13T07:09:23.017088197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:23.017152 containerd[1471]: time="2025-08-13T07:09:23.017139242Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:09:23.017404 containerd[1471]: time="2025-08-13T07:09:23.017345379Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:09:23.017486 containerd[1471]: time="2025-08-13T07:09:23.017464552Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:09:23.023483 containerd[1471]: time="2025-08-13T07:09:23.023426709Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:09:23.023554 containerd[1471]: time="2025-08-13T07:09:23.023523801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:09:23.023578 containerd[1471]: time="2025-08-13T07:09:23.023569447Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:09:23.023599 containerd[1471]: time="2025-08-13T07:09:23.023590576Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:09:23.023680 containerd[1471]: time="2025-08-13T07:09:23.023635370Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:09:23.024083 containerd[1471]: time="2025-08-13T07:09:23.023879107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:09:23.023894 systemd-networkd[1402]: eth0: Gained IPv6LL
Aug 13 07:09:23.024406 containerd[1471]: time="2025-08-13T07:09:23.024340652Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024647097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024672334Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024689045Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024708772Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024728740Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024747144Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024762553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024778563Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024795836Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024811906Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024828156Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024858042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024872369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026165 containerd[1471]: time="2025-08-13T07:09:23.024888499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.024905281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.024922443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.024955755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.024972056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025003495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025016549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025031668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025042728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025073276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025091279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025122769Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025148908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025163926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026449 containerd[1471]: time="2025-08-13T07:09:23.025178383Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025253824Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025281636Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025293128Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025307916Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025326520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025348391Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025363009Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 07:09:23.026718 containerd[1471]: time="2025-08-13T07:09:23.025376564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 07:09:23.027074 containerd[1471]: time="2025-08-13T07:09:23.025772587Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 07:09:23.027074 containerd[1471]: time="2025-08-13T07:09:23.025829293Z" level=info msg="Connect containerd service"
Aug 13 07:09:23.027074 containerd[1471]: time="2025-08-13T07:09:23.025899705Z" level=info msg="using legacy CRI server"
Aug 13 07:09:23.027074 containerd[1471]: time="2025-08-13T07:09:23.025910495Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 07:09:23.027074 containerd[1471]: time="2025-08-13T07:09:23.026140086Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 07:09:23.027369 containerd[1471]: time="2025-08-13T07:09:23.027131695Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027366585Z" level=info msg="Start subscribing containerd event"
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027455942Z" level=info msg="Start recovering state"
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027637122Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027641059Z" level=info msg="Start event monitor"
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027703266Z" level=info msg="Start snapshots syncer"
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027723464Z" level=info msg="Start cni network conf syncer for default"
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027733072Z" level=info msg="Start streaming server"
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.027704398Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 07:09:23.028467 containerd[1471]: time="2025-08-13T07:09:23.028348966Z" level=info msg="containerd successfully booted in 0.093561s"
Aug 13 07:09:23.028015 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 07:09:23.030765 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:09:23.034245 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:09:23.049923 tar[1467]: linux-amd64/LICENSE
Aug 13 07:09:23.050597 tar[1467]: linux-amd64/README.md
Aug 13 07:09:23.050405 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 13 07:09:23.053060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:09:23.055215 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:09:23.075200 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 07:09:23.085441 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:09:23.087092 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 13 07:09:23.087336 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 13 07:09:23.089849 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:09:24.613886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:09:24.615624 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 07:09:24.616948 systemd[1]: Startup finished in 1.209s (kernel) + 5.858s (initrd) + 5.466s (userspace) = 12.534s.
Aug 13 07:09:24.621398 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:09:25.549759 kubelet[1559]: E0813 07:09:25.549682 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:09:25.555052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:09:25.555313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:09:25.555802 systemd[1]: kubelet.service: Consumed 2.342s CPU time.
Aug 13 07:09:26.823456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:09:26.824736 systemd[1]: Started sshd@0-10.0.0.75:22-10.0.0.1:38840.service - OpenSSH per-connection server daemon (10.0.0.1:38840).
Aug 13 07:09:26.872305 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 38840 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:09:26.874469 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:26.883568 systemd-logind[1459]: New session 1 of user core.
Aug 13 07:09:26.884875 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:09:26.898194 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:09:26.911297 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:09:26.914311 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:09:26.923091 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:09:27.036483 systemd[1577]: Queued start job for default target default.target.
Aug 13 07:09:27.045315 systemd[1577]: Created slice app.slice - User Application Slice.
Aug 13 07:09:27.045341 systemd[1577]: Reached target paths.target - Paths.
Aug 13 07:09:27.045354 systemd[1577]: Reached target timers.target - Timers.
Aug 13 07:09:27.046910 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:09:27.058841 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:09:27.058970 systemd[1577]: Reached target sockets.target - Sockets.
Aug 13 07:09:27.059010 systemd[1577]: Reached target basic.target - Basic System.
Aug 13 07:09:27.059046 systemd[1577]: Reached target default.target - Main User Target.
Aug 13 07:09:27.059082 systemd[1577]: Startup finished in 128ms.
Aug 13 07:09:27.059414 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:09:27.061057 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:09:27.126159 systemd[1]: Started sshd@1-10.0.0.75:22-10.0.0.1:38850.service - OpenSSH per-connection server daemon (10.0.0.1:38850).
Aug 13 07:09:27.162623 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 38850 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:09:27.164465 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:27.168879 systemd-logind[1459]: New session 2 of user core.
Aug 13 07:09:27.178134 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:09:27.233549 sshd[1588]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:27.241565 systemd[1]: sshd@1-10.0.0.75:22-10.0.0.1:38850.service: Deactivated successfully.
Aug 13 07:09:27.243185 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:09:27.244493 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:09:27.252204 systemd[1]: Started sshd@2-10.0.0.75:22-10.0.0.1:38852.service - OpenSSH per-connection server daemon (10.0.0.1:38852).
Aug 13 07:09:27.253102 systemd-logind[1459]: Removed session 2.
Aug 13 07:09:27.282581 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 38852 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:09:27.284060 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:27.288700 systemd-logind[1459]: New session 3 of user core.
Aug 13 07:09:27.308138 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:09:27.358782 sshd[1595]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:27.379389 systemd[1]: sshd@2-10.0.0.75:22-10.0.0.1:38852.service: Deactivated successfully.
Aug 13 07:09:27.382185 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:09:27.384344 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:09:27.390224 systemd[1]: Started sshd@3-10.0.0.75:22-10.0.0.1:38856.service - OpenSSH per-connection server daemon (10.0.0.1:38856).
Aug 13 07:09:27.391231 systemd-logind[1459]: Removed session 3.
Aug 13 07:09:27.422045 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 38856 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:09:27.423930 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:27.428231 systemd-logind[1459]: New session 4 of user core.
Aug 13 07:09:27.438109 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:09:27.493592 sshd[1602]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:27.513002 systemd[1]: sshd@3-10.0.0.75:22-10.0.0.1:38856.service: Deactivated successfully.
Aug 13 07:09:27.514825 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:09:27.516642 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:09:27.518091 systemd[1]: Started sshd@4-10.0.0.75:22-10.0.0.1:38868.service - OpenSSH per-connection server daemon (10.0.0.1:38868).
Aug 13 07:09:27.519149 systemd-logind[1459]: Removed session 4.
Aug 13 07:09:27.555438 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 38868 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:09:27.557038 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:27.561001 systemd-logind[1459]: New session 5 of user core.
Aug 13 07:09:27.570094 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:09:27.627862 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:09:27.628233 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:09:27.644625 sudo[1612]: pam_unix(sudo:session): session closed for user root
Aug 13 07:09:27.646465 sshd[1609]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:27.658834 systemd[1]: sshd@4-10.0.0.75:22-10.0.0.1:38868.service: Deactivated successfully.
Aug 13 07:09:27.660762 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:09:27.662132 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:09:27.663548 systemd[1]: Started sshd@5-10.0.0.75:22-10.0.0.1:38878.service - OpenSSH per-connection server daemon (10.0.0.1:38878).
Aug 13 07:09:27.664366 systemd-logind[1459]: Removed session 5.
Aug 13 07:09:27.700282 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 38878 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:09:27.702001 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:27.705673 systemd-logind[1459]: New session 6 of user core.
Aug 13 07:09:27.715096 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:09:27.769371 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:09:27.769737 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:09:27.774155 sudo[1621]: pam_unix(sudo:session): session closed for user root
Aug 13 07:09:27.780500 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:09:27.780836 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:09:27.801229 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:09:27.802936 auditctl[1624]: No rules
Aug 13 07:09:27.804223 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:09:27.804510 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:09:27.806519 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:09:27.837856 augenrules[1642]: No rules
Aug 13 07:09:27.839692 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:09:27.841115 sudo[1620]: pam_unix(sudo:session): session closed for user root
Aug 13 07:09:27.843129 sshd[1617]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:27.853882 systemd[1]: sshd@5-10.0.0.75:22-10.0.0.1:38878.service: Deactivated successfully.
Aug 13 07:09:27.855597 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:09:27.857466 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:09:27.867251 systemd[1]: Started sshd@6-10.0.0.75:22-10.0.0.1:38882.service - OpenSSH per-connection server daemon (10.0.0.1:38882).
Aug 13 07:09:27.868343 systemd-logind[1459]: Removed session 6.
Aug 13 07:09:27.898323 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 38882 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:09:27.899944 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:27.903631 systemd-logind[1459]: New session 7 of user core.
Aug 13 07:09:27.919105 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:09:27.972870 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:09:27.973243 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:09:28.679209 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:09:28.679336 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:09:29.403334 dockerd[1671]: time="2025-08-13T07:09:29.403209734Z" level=info msg="Starting up"
Aug 13 07:09:29.903962 dockerd[1671]: time="2025-08-13T07:09:29.903852360Z" level=info msg="Loading containers: start."
Aug 13 07:09:30.043014 kernel: Initializing XFRM netlink socket
Aug 13 07:09:30.127335 systemd-networkd[1402]: docker0: Link UP
Aug 13 07:09:30.150573 dockerd[1671]: time="2025-08-13T07:09:30.150505637Z" level=info msg="Loading containers: done."
Aug 13 07:09:30.171843 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck196974840-merged.mount: Deactivated successfully.
Aug 13 07:09:30.174386 dockerd[1671]: time="2025-08-13T07:09:30.174333605Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:09:30.174509 dockerd[1671]: time="2025-08-13T07:09:30.174482665Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:09:30.174638 dockerd[1671]: time="2025-08-13T07:09:30.174614412Z" level=info msg="Daemon has completed initialization"
Aug 13 07:09:30.286476 dockerd[1671]: time="2025-08-13T07:09:30.286361157Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:09:30.286608 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:09:31.053563 containerd[1471]: time="2025-08-13T07:09:31.053480312Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 07:09:31.805453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152385148.mount: Deactivated successfully.
Aug 13 07:09:33.015374 containerd[1471]: time="2025-08-13T07:09:33.015301165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:33.015940 containerd[1471]: time="2025-08-13T07:09:33.015897042Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 07:09:33.017314 containerd[1471]: time="2025-08-13T07:09:33.017265297Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:33.021114 containerd[1471]: time="2025-08-13T07:09:33.021038340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:33.023366 containerd[1471]: time="2025-08-13T07:09:33.023244326Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.969657915s"
Aug 13 07:09:33.023471 containerd[1471]: time="2025-08-13T07:09:33.023397713Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 07:09:33.025312 containerd[1471]: time="2025-08-13T07:09:33.025224458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 07:09:34.670537 containerd[1471]: time="2025-08-13T07:09:34.670465066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:34.671179 containerd[1471]: time="2025-08-13T07:09:34.671139601Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 07:09:34.672447 containerd[1471]: time="2025-08-13T07:09:34.672377020Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:34.675160 containerd[1471]: time="2025-08-13T07:09:34.675106657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:34.676242 containerd[1471]: time="2025-08-13T07:09:34.676196781Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.650888566s"
Aug 13 07:09:34.676242 containerd[1471]: time="2025-08-13T07:09:34.676233310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 07:09:34.676745 containerd[1471]: time="2025-08-13T07:09:34.676698041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 07:09:35.864083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:09:35.875199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:09:36.457264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:09:36.463619 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:09:37.966050 kubelet[1892]: E0813 07:09:37.965954 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:09:37.972598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:09:37.972872 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:09:38.275665 containerd[1471]: time="2025-08-13T07:09:38.275512492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:38.348422 containerd[1471]: time="2025-08-13T07:09:38.348275136Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 07:09:38.357867 containerd[1471]: time="2025-08-13T07:09:38.357770908Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:38.361783 containerd[1471]: time="2025-08-13T07:09:38.361720412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:38.362759 containerd[1471]: time="2025-08-13T07:09:38.362729764Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 3.685993121s"
Aug 13 07:09:38.362835 containerd[1471]: time="2025-08-13T07:09:38.362763637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 07:09:38.363538 containerd[1471]: time="2025-08-13T07:09:38.363487234Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 07:09:45.303527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917762726.mount: Deactivated successfully.
Aug 13 07:09:45.931659 containerd[1471]: time="2025-08-13T07:09:45.931557022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:45.932329 containerd[1471]: time="2025-08-13T07:09:45.932295075Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 07:09:45.933629 containerd[1471]: time="2025-08-13T07:09:45.933591165Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:45.935804 containerd[1471]: time="2025-08-13T07:09:45.935740785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:45.936340 containerd[1471]: time="2025-08-13T07:09:45.936259898Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 7.572727299s"
Aug 13 07:09:45.936340 containerd[1471]: time="2025-08-13T07:09:45.936311595Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 07:09:45.936906 containerd[1471]: time="2025-08-13T07:09:45.936881944Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:09:46.407343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495039329.mount: Deactivated successfully.
Aug 13 07:09:47.593073 containerd[1471]: time="2025-08-13T07:09:47.592986038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:47.594487 containerd[1471]: time="2025-08-13T07:09:47.594452757Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 07:09:47.596077 containerd[1471]: time="2025-08-13T07:09:47.595964100Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:47.599661 containerd[1471]: time="2025-08-13T07:09:47.599616057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:47.601086 containerd[1471]: time="2025-08-13T07:09:47.601033283Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.664115553s"
Aug 13 07:09:47.601086 containerd[1471]: time="2025-08-13T07:09:47.601081113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:09:47.601686 containerd[1471]: time="2025-08-13T07:09:47.601654338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:09:48.223324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 07:09:48.232157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:09:48.514385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:09:48.521303 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:09:48.741641 kubelet[1969]: E0813 07:09:48.741550 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:09:48.746299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:09:48.746524 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:09:48.937371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195691129.mount: Deactivated successfully.
Aug 13 07:09:48.945192 containerd[1471]: time="2025-08-13T07:09:48.945146195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:48.945855 containerd[1471]: time="2025-08-13T07:09:48.945801964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 07:09:48.946962 containerd[1471]: time="2025-08-13T07:09:48.946926783Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:48.949284 containerd[1471]: time="2025-08-13T07:09:48.949247303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:48.949926 containerd[1471]: time="2025-08-13T07:09:48.949872325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.348183943s"
Aug 13 07:09:48.949926 containerd[1471]: time="2025-08-13T07:09:48.949920084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:09:48.950479 containerd[1471]: time="2025-08-13T07:09:48.950453544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 07:09:49.596766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533173807.mount: Deactivated successfully.
Aug 13 07:09:52.581590 containerd[1471]: time="2025-08-13T07:09:52.581503576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:52.582348 containerd[1471]: time="2025-08-13T07:09:52.582265435Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Aug 13 07:09:52.584045 containerd[1471]: time="2025-08-13T07:09:52.583998123Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:52.587187 containerd[1471]: time="2025-08-13T07:09:52.587144140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:52.588588 containerd[1471]: time="2025-08-13T07:09:52.588541009Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.638040797s"
Aug 13 07:09:52.588649 containerd[1471]: time="2025-08-13T07:09:52.588591754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 07:09:54.610647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:09:54.627277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:09:54.652737 systemd[1]: Reloading requested from client PID 2065 ('systemctl') (unit session-7.scope)...
Aug 13 07:09:54.652757 systemd[1]: Reloading...
Aug 13 07:09:54.737009 zram_generator::config[2107]: No configuration found.
Aug 13 07:09:54.958283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:09:55.037155 systemd[1]: Reloading finished in 383 ms.
Aug 13 07:09:55.085993 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:09:55.086093 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:09:55.086390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:09:55.089124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:09:55.268020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:09:55.272747 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:09:55.319765 kubelet[2153]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:09:55.319765 kubelet[2153]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:09:55.319765 kubelet[2153]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:09:55.320345 kubelet[2153]: I0813 07:09:55.319808 2153 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:09:55.827317 kubelet[2153]: I0813 07:09:55.827258 2153 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 07:09:55.827317 kubelet[2153]: I0813 07:09:55.827303 2153 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:09:55.827692 kubelet[2153]: I0813 07:09:55.827667 2153 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 07:09:55.849925 kubelet[2153]: I0813 07:09:55.849859 2153 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:09:55.850867 kubelet[2153]: E0813 07:09:55.850829 2153 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:09:55.860310 kubelet[2153]: E0813 07:09:55.860263 2153 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:09:55.860310 kubelet[2153]: I0813 07:09:55.860291 2153 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:09:55.867586 kubelet[2153]: I0813 07:09:55.867544 2153 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:09:55.868249 kubelet[2153]: I0813 07:09:55.868212 2153 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 07:09:55.868442 kubelet[2153]: I0813 07:09:55.868388 2153 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:09:55.868615 kubelet[2153]: I0813 07:09:55.868432 2153 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:09:55.868746 kubelet[2153]: I0813 07:09:55.868634 2153 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:09:55.868746 kubelet[2153]: I0813 07:09:55.868644 2153 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 07:09:55.868813 kubelet[2153]: I0813 07:09:55.868780 2153 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:09:55.871270 kubelet[2153]: I0813 07:09:55.871220 2153 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 07:09:55.871270 kubelet[2153]: I0813 07:09:55.871250 2153 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:09:55.871476 kubelet[2153]: I0813 07:09:55.871293 2153 kubelet.go:314] "Adding apiserver pod source"
Aug 13 07:09:55.871476 kubelet[2153]: I0813 07:09:55.871318 2153 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:09:55.874427 kubelet[2153]: W0813 07:09:55.874287 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused
Aug 13 07:09:55.874427 kubelet[2153]: E0813 07:09:55.874364 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:09:55.874957 kubelet[2153]: W0813 07:09:55.874898 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused
Aug 13 07:09:55.875064 kubelet[2153]: E0813 07:09:55.875009 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:09:55.876149 kubelet[2153]: I0813 07:09:55.876126 2153 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:09:55.876586 kubelet[2153]: I0813 07:09:55.876561 2153 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:09:55.877231 kubelet[2153]: W0813 07:09:55.877205 2153 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 07:09:55.879208 kubelet[2153]: I0813 07:09:55.879170 2153 server.go:1274] "Started kubelet"
Aug 13 07:09:55.880201 kubelet[2153]: I0813 07:09:55.879261 2153 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:09:55.880201 kubelet[2153]: I0813 07:09:55.879341 2153 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:09:55.880201 kubelet[2153]: I0813 07:09:55.879879 2153 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:09:55.880351 kubelet[2153]: I0813 07:09:55.880320 2153 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 07:09:55.881212 kubelet[2153]: I0813 07:09:55.881175 2153 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:09:55.882707 kubelet[2153]: I0813 07:09:55.882679 2153 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:09:55.885480 kubelet[2153]: I0813 07:09:55.885019 2153 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 07:09:55.885480 kubelet[2153]: I0813 07:09:55.885127 2153 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 07:09:55.885480 kubelet[2153]: I0813 07:09:55.885184 2153 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:09:55.886427 kubelet[2153]: W0813 07:09:55.885474 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused
Aug 13 07:09:55.886427 kubelet[2153]: E0813 07:09:55.885742 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:09:55.886427 kubelet[2153]: E0813 07:09:55.885772 2153 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:09:55.886427 kubelet[2153]: I0813 07:09:55.885988 2153 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:09:55.886427 kubelet[2153]: E0813 07:09:55.886306 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 07:09:55.886427 kubelet[2153]: E0813 07:09:55.886385 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="200ms"
Aug 13 07:09:55.886636 kubelet[2153]: E0813 07:09:55.885356 2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b41f3f7ad6c42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:09:55.879136322 +0000 UTC m=+0.601514264,LastTimestamp:2025-08-13 07:09:55.879136322 +0000 UTC m=+0.601514264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 07:09:55.887726 kubelet[2153]: I0813 07:09:55.887707 2153 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:09:55.887726 kubelet[2153]: I0813 07:09:55.887722 2153 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:09:55.967079 kubelet[2153]: I0813 07:09:55.967015 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:09:55.968828 kubelet[2153]: I0813 07:09:55.968795 2153 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Aug 13 07:09:55.968879 kubelet[2153]: I0813 07:09:55.968834 2153 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:09:55.968879 kubelet[2153]: I0813 07:09:55.968859 2153 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:09:55.968879 kubelet[2153]: I0813 07:09:55.968868 2153 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:09:55.968935 kubelet[2153]: I0813 07:09:55.968886 2153 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:09:55.968935 kubelet[2153]: I0813 07:09:55.968908 2153 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:55.968935 kubelet[2153]: E0813 07:09:55.968902 2153 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:09:55.970529 kubelet[2153]: W0813 07:09:55.969656 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Aug 13 07:09:55.970529 kubelet[2153]: E0813 07:09:55.969716 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:09:55.986737 kubelet[2153]: E0813 07:09:55.986684 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:09:56.070066 kubelet[2153]: E0813 07:09:56.069960 2153 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:09:56.087521 kubelet[2153]: E0813 07:09:56.087335 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:09:56.088010 kubelet[2153]: E0813 07:09:56.087933 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="400ms" Aug 13 07:09:56.187870 kubelet[2153]: E0813 07:09:56.187803 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:09:56.271167 kubelet[2153]: E0813 07:09:56.271099 2153 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:09:56.288924 kubelet[2153]: E0813 07:09:56.288880 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:09:56.353320 kubelet[2153]: I0813 07:09:56.353182 2153 policy_none.go:49] "None policy: Start" Aug 13 07:09:56.354267 kubelet[2153]: I0813 07:09:56.354235 2153 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:09:56.354267 kubelet[2153]: I0813 07:09:56.354269 2153 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:09:56.360113 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:09:56.378745 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Aug 13 07:09:56.381720 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:09:56.389926 kubelet[2153]: E0813 07:09:56.389894 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:09:56.392911 kubelet[2153]: I0813 07:09:56.392879 2153 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:09:56.393191 kubelet[2153]: I0813 07:09:56.393156 2153 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:09:56.393237 kubelet[2153]: I0813 07:09:56.393178 2153 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:09:56.393559 kubelet[2153]: I0813 07:09:56.393427 2153 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:09:56.394865 kubelet[2153]: E0813 07:09:56.394841 2153 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:09:56.488943 kubelet[2153]: E0813 07:09:56.488885 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="800ms" Aug 13 07:09:56.495432 kubelet[2153]: I0813 07:09:56.495387 2153 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:09:56.495733 kubelet[2153]: E0813 07:09:56.495707 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Aug 13 07:09:56.681696 systemd[1]: Created slice kubepods-burstable-pod297a949d7b2fb6ab415511ef9f169b4e.slice - libcontainer container kubepods-burstable-pod297a949d7b2fb6ab415511ef9f169b4e.slice. 
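[annotation] The kubepods-*-pod<UID>.slice names in the systemd entries above come from kubelet's systemd cgroup driver ("CgroupDriver":"systemd" in the nodeConfig dump earlier), which nests a per-pod slice under the QoS slice and escapes the pod UID for systemd by replacing dashes with underscores. A minimal sketch of that naming rule as observed in this log, not a quotation of kubelet's implementation:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName is a hypothetical helper reproducing the slice names seen in
    // this log for the burstable and besteffort classes (guaranteed pods, not
    // shown here, sit directly under kubepods.slice). Pod UID dashes become "_".
    func podSliceName(qosClass, podUID string) string {
    	escaped := strings.ReplaceAll(podUID, "-", "_")
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
    	// Matches "kubepods-burstable-pod297a949d7b2fb6ab415511ef9f169b4e.slice" above.
    	fmt.Println(podSliceName("burstable", "297a949d7b2fb6ab415511ef9f169b4e"))
    	// Matches the besteffort slice created later in this log for kube-proxy.
    	fmt.Println(podSliceName("besteffort", "3425735e-a13b-4b87-bdd0-cad127d951fe"))
    }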
Aug 13 07:09:56.689402 kubelet[2153]: I0813 07:09:56.689346 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/297a949d7b2fb6ab415511ef9f169b4e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"297a949d7b2fb6ab415511ef9f169b4e\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:09:56.689402 kubelet[2153]: I0813 07:09:56.689389 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:09:56.689402 kubelet[2153]: I0813 07:09:56.689408 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/297a949d7b2fb6ab415511ef9f169b4e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"297a949d7b2fb6ab415511ef9f169b4e\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:09:56.689650 kubelet[2153]: I0813 07:09:56.689428 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/297a949d7b2fb6ab415511ef9f169b4e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"297a949d7b2fb6ab415511ef9f169b4e\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:09:56.689650 kubelet[2153]: I0813 07:09:56.689446 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:09:56.689650 kubelet[2153]: I0813 07:09:56.689464 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:09:56.689650 kubelet[2153]: I0813 07:09:56.689481 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:09:56.689650 kubelet[2153]: I0813 07:09:56.689499 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:09:56.689807 kubelet[2153]: I0813 07:09:56.689518 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 13 07:09:56.697953 kubelet[2153]: I0813 07:09:56.697901 2153 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:09:56.698264 kubelet[2153]: E0813 07:09:56.698235 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Aug 13 07:09:56.705598 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Aug 13 07:09:56.724997 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. Aug 13 07:09:57.002928 kubelet[2153]: E0813 07:09:57.002771 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:09:57.003813 containerd[1471]: time="2025-08-13T07:09:57.003762132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:297a949d7b2fb6ab415511ef9f169b4e,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:57.008061 kubelet[2153]: E0813 07:09:57.008038 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:09:57.008376 containerd[1471]: time="2025-08-13T07:09:57.008339776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:57.013040 kubelet[2153]: W0813 07:09:57.012949 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Aug 13 07:09:57.013040 kubelet[2153]: E0813 07:09:57.013036 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:09:57.027564 kubelet[2153]: E0813 07:09:57.027510 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:09:57.028020 containerd[1471]: time="2025-08-13T07:09:57.027992333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:57.099643 kubelet[2153]: I0813 07:09:57.099564 2153 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:09:57.100094 kubelet[2153]: E0813 07:09:57.100034 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Aug 13 07:09:57.196939 kubelet[2153]: W0813 07:09:57.196894 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Aug 13 07:09:57.197073 kubelet[2153]: E0813 07:09:57.196949 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:09:57.290491 kubelet[2153]: E0813 07:09:57.290421 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="1.6s" Aug 13 07:09:57.312438 kubelet[2153]: W0813 07:09:57.312350 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Aug 13 07:09:57.312438 kubelet[2153]: E0813 07:09:57.312434 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:09:57.475640 kubelet[2153]: W0813 07:09:57.475545 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Aug 13 07:09:57.476100 kubelet[2153]: E0813 07:09:57.475650 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:09:57.862545 kubelet[2153]: E0813 07:09:57.862433 2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b41f3f7ad6c42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:09:55.879136322 +0000 UTC m=+0.601514264,LastTimestamp:2025-08-13 07:09:55.879136322 +0000 UTC m=+0.601514264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:09:57.874731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925170060.mount: Deactivated successfully. 
Aug 13 07:09:57.901756 kubelet[2153]: I0813 07:09:57.901710 2153 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:09:57.902116 kubelet[2153]: E0813 07:09:57.902068 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Aug 13 07:09:58.010756 kubelet[2153]: E0813 07:09:58.010708 2153 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:09:58.085583 containerd[1471]: time="2025-08-13T07:09:58.085525385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:58.195068 containerd[1471]: time="2025-08-13T07:09:58.194907081Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:58.255735 containerd[1471]: time="2025-08-13T07:09:58.255643114Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:09:58.385099 containerd[1471]: time="2025-08-13T07:09:58.385019595Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:58.415198 containerd[1471]: time="2025-08-13T07:09:58.415117915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:09:58.457652 containerd[1471]: time="2025-08-13T07:09:58.457480126Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:58.486767 containerd[1471]: time="2025-08-13T07:09:58.486685640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:09:58.615954 containerd[1471]: time="2025-08-13T07:09:58.615884860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:58.616788 containerd[1471]: time="2025-08-13T07:09:58.616752539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.612895184s" Aug 13 07:09:58.617653 containerd[1471]: time="2025-08-13T07:09:58.617611813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.609189448s" Aug 13 07:09:58.811547 containerd[1471]: time="2025-08-13T07:09:58.811502039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.783457887s" Aug 13 07:09:58.891052 kubelet[2153]: E0813 07:09:58.890963 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="3.2s" Aug 13 07:09:59.010286 kubelet[2153]: W0813 07:09:59.010236 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Aug 13 07:09:59.010538 kubelet[2153]: E0813 07:09:59.010497 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:09:59.181020 containerd[1471]: time="2025-08-13T07:09:59.180259869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:59.181020 containerd[1471]: time="2025-08-13T07:09:59.180492353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:59.181020 containerd[1471]: time="2025-08-13T07:09:59.180507150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.181020 containerd[1471]: time="2025-08-13T07:09:59.180663169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.189715 containerd[1471]: time="2025-08-13T07:09:59.189383542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:59.189715 containerd[1471]: time="2025-08-13T07:09:59.189450841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:59.189715 containerd[1471]: time="2025-08-13T07:09:59.189472072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.189715 containerd[1471]: time="2025-08-13T07:09:59.189601839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.310539 systemd[1]: Started cri-containerd-2da66dcd4aac6eba388df62091e743ae7db680ce1bf5f8df7f429986fc7673b3.scope - libcontainer container 2da66dcd4aac6eba388df62091e743ae7db680ce1bf5f8df7f429986fc7673b3. 
Aug 13 07:09:59.316202 systemd[1]: Started cri-containerd-02870052a75af42767b0292031b076910ab964afe1bc502c0d83a2a8e1d31244.scope - libcontainer container 02870052a75af42767b0292031b076910ab964afe1bc502c0d83a2a8e1d31244. Aug 13 07:09:59.326299 containerd[1471]: time="2025-08-13T07:09:59.326159731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:59.326299 containerd[1471]: time="2025-08-13T07:09:59.326219495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:59.326299 containerd[1471]: time="2025-08-13T07:09:59.326232760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.326610 containerd[1471]: time="2025-08-13T07:09:59.326314967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:59.399339 systemd[1]: Started cri-containerd-5577eff4f12de8eb4123a6ad85dc744b50461be19f5fd892b9779e9fa9e4f351.scope - libcontainer container 5577eff4f12de8eb4123a6ad85dc744b50461be19f5fd892b9779e9fa9e4f351. Aug 13 07:09:59.403721 containerd[1471]: time="2025-08-13T07:09:59.403617095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:297a949d7b2fb6ab415511ef9f169b4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2da66dcd4aac6eba388df62091e743ae7db680ce1bf5f8df7f429986fc7673b3\"" Aug 13 07:09:59.405134 kubelet[2153]: E0813 07:09:59.405074 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:09:59.410042 containerd[1471]: time="2025-08-13T07:09:59.410011337Z" level=info msg="CreateContainer within sandbox \"2da66dcd4aac6eba388df62091e743ae7db680ce1bf5f8df7f429986fc7673b3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:09:59.410496 containerd[1471]: time="2025-08-13T07:09:59.410307353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"02870052a75af42767b0292031b076910ab964afe1bc502c0d83a2a8e1d31244\"" Aug 13 07:09:59.411969 kubelet[2153]: E0813 07:09:59.411940 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:09:59.414272 containerd[1471]: time="2025-08-13T07:09:59.414247727Z" level=info msg="CreateContainer within sandbox \"02870052a75af42767b0292031b076910ab964afe1bc502c0d83a2a8e1d31244\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:09:59.427742 containerd[1471]: time="2025-08-13T07:09:59.427688967Z" level=info msg="CreateContainer within sandbox \"2da66dcd4aac6eba388df62091e743ae7db680ce1bf5f8df7f429986fc7673b3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07783eaf37c627d0421273465488ba466c663ba40912b51f567eec66ca24cf62\"" Aug 13 07:09:59.429264 containerd[1471]: time="2025-08-13T07:09:59.428225902Z" level=info msg="StartContainer for \"07783eaf37c627d0421273465488ba466c663ba40912b51f567eec66ca24cf62\"" Aug 13 07:09:59.437678 containerd[1471]: time="2025-08-13T07:09:59.437556000Z" level=info msg="CreateContainer 
within sandbox \"02870052a75af42767b0292031b076910ab964afe1bc502c0d83a2a8e1d31244\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5438857920fe009212ec8ee6892ea2010ab5ce1068e763ef279332048e813207\"" Aug 13 07:09:59.439479 containerd[1471]: time="2025-08-13T07:09:59.439296925Z" level=info msg="StartContainer for \"5438857920fe009212ec8ee6892ea2010ab5ce1068e763ef279332048e813207\"" Aug 13 07:09:59.452216 containerd[1471]: time="2025-08-13T07:09:59.452149280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5577eff4f12de8eb4123a6ad85dc744b50461be19f5fd892b9779e9fa9e4f351\"" Aug 13 07:09:59.453040 kubelet[2153]: E0813 07:09:59.453012 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:09:59.454892 containerd[1471]: time="2025-08-13T07:09:59.454860798Z" level=info msg="CreateContainer within sandbox \"5577eff4f12de8eb4123a6ad85dc744b50461be19f5fd892b9779e9fa9e4f351\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:09:59.458138 systemd[1]: Started cri-containerd-07783eaf37c627d0421273465488ba466c663ba40912b51f567eec66ca24cf62.scope - libcontainer container 07783eaf37c627d0421273465488ba466c663ba40912b51f567eec66ca24cf62. Aug 13 07:09:59.470144 systemd[1]: Started cri-containerd-5438857920fe009212ec8ee6892ea2010ab5ce1068e763ef279332048e813207.scope - libcontainer container 5438857920fe009212ec8ee6892ea2010ab5ce1068e763ef279332048e813207. Aug 13 07:09:59.474073 containerd[1471]: time="2025-08-13T07:09:59.473954382Z" level=info msg="CreateContainer within sandbox \"5577eff4f12de8eb4123a6ad85dc744b50461be19f5fd892b9779e9fa9e4f351\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"14c3c47847e1540ed5d5939f233c29fa66be01425280bbbdc04a15ee0a583d38\"" Aug 13 07:09:59.475386 containerd[1471]: time="2025-08-13T07:09:59.475278421Z" level=info msg="StartContainer for \"14c3c47847e1540ed5d5939f233c29fa66be01425280bbbdc04a15ee0a583d38\"" Aug 13 07:09:59.504105 kubelet[2153]: I0813 07:09:59.504022 2153 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:09:59.504398 kubelet[2153]: E0813 07:09:59.504366 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Aug 13 07:09:59.509308 systemd[1]: Started cri-containerd-14c3c47847e1540ed5d5939f233c29fa66be01425280bbbdc04a15ee0a583d38.scope - libcontainer container 14c3c47847e1540ed5d5939f233c29fa66be01425280bbbdc04a15ee0a583d38. 
Aug 13 07:09:59.515215 containerd[1471]: time="2025-08-13T07:09:59.515117062Z" level=info msg="StartContainer for \"07783eaf37c627d0421273465488ba466c663ba40912b51f567eec66ca24cf62\" returns successfully" Aug 13 07:09:59.527729 containerd[1471]: time="2025-08-13T07:09:59.527591746Z" level=info msg="StartContainer for \"5438857920fe009212ec8ee6892ea2010ab5ce1068e763ef279332048e813207\" returns successfully" Aug 13 07:09:59.558501 containerd[1471]: time="2025-08-13T07:09:59.558448155Z" level=info msg="StartContainer for \"14c3c47847e1540ed5d5939f233c29fa66be01425280bbbdc04a15ee0a583d38\" returns successfully" Aug 13 07:10:00.019024 kubelet[2153]: E0813 07:10:00.018235 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:00.022769 kubelet[2153]: E0813 07:10:00.022737 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:00.029053 kubelet[2153]: E0813 07:10:00.029016 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:00.891088 kubelet[2153]: I0813 07:10:00.891031 2153 apiserver.go:52] "Watching apiserver" Aug 13 07:10:00.968212 kubelet[2153]: E0813 07:10:00.968181 2153 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 07:10:00.985678 kubelet[2153]: I0813 07:10:00.985625 2153 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:10:01.028213 kubelet[2153]: E0813 07:10:01.028180 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:01.322043 kubelet[2153]: E0813 07:10:01.322005 2153 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 07:10:01.757367 kubelet[2153]: E0813 07:10:01.757216 2153 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 07:10:02.095410 kubelet[2153]: E0813 07:10:02.095332 2153 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:10:02.674041 kubelet[2153]: E0813 07:10:02.673993 2153 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 07:10:02.706934 kubelet[2153]: I0813 07:10:02.706892 2153 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:10:02.722007 kubelet[2153]: I0813 07:10:02.720038 2153 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:10:02.722007 kubelet[2153]: E0813 07:10:02.720105 2153 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 07:10:02.886058 systemd[1]: Reloading requested from client PID 2432 ('systemctl') (unit session-7.scope)... 
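[annotation] The recurring dns.go "Nameserver limits exceeded" warnings above mean the node's resolv.conf lists more nameservers than the limit of three that can be propagated to pods; the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are kept and the rest dropped. A sketch of that clamping, assuming the node file simply has a hypothetical fourth entry:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // clampNameservers keeps at most `limit` nameserver entries from a
    // resolv.conf body, mirroring the truncation reported in the log above.
    func clampNameservers(resolvConf string, limit int) []string {
    	var servers []string
    	for _, line := range strings.Split(resolvConf, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > limit {
    		servers = servers[:limit]
    	}
    	return servers
    }

    func main() {
    	// The fourth server is hypothetical; the log only shows the applied line.
    	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    	fmt.Println(clampNameservers(conf, 3)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }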
Aug 13 07:10:02.886081 systemd[1]: Reloading... Aug 13 07:10:02.979021 zram_generator::config[2471]: No configuration found. Aug 13 07:10:03.164347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:10:03.258574 systemd[1]: Reloading finished in 371 ms. Aug 13 07:10:03.311058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:10:03.337840 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:10:03.338172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:10:03.338229 systemd[1]: kubelet.service: Consumed 1.291s CPU time, 137.1M memory peak, 0B memory swap peak. Aug 13 07:10:03.350224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:10:03.524194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:10:03.529930 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:10:03.576441 kubelet[2516]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:10:03.576441 kubelet[2516]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:10:03.576441 kubelet[2516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:10:03.576922 kubelet[2516]: I0813 07:10:03.576733 2516 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:10:03.584422 kubelet[2516]: I0813 07:10:03.584376 2516 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:10:03.584422 kubelet[2516]: I0813 07:10:03.584413 2516 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:10:03.584751 kubelet[2516]: I0813 07:10:03.584724 2516 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:10:03.586148 kubelet[2516]: I0813 07:10:03.586122 2516 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:10:03.588170 kubelet[2516]: I0813 07:10:03.587966 2516 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:10:03.590856 kubelet[2516]: E0813 07:10:03.590818 2516 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:10:03.590935 kubelet[2516]: I0813 07:10:03.590856 2516 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:10:03.596625 kubelet[2516]: I0813 07:10:03.596589 2516 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:10:03.596750 kubelet[2516]: I0813 07:10:03.596732 2516 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:10:03.596917 kubelet[2516]: I0813 07:10:03.596884 2516 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:10:03.597120 kubelet[2516]: I0813 07:10:03.596914 2516 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:10:03.597198 kubelet[2516]: I0813 07:10:03.597129 2516 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:10:03.597198 kubelet[2516]: I0813 07:10:03.597139 2516 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:10:03.597198 kubelet[2516]: I0813 07:10:03.597166 2516 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:10:03.597344 kubelet[2516]: I0813 07:10:03.597316 2516 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:10:03.597344 kubelet[2516]: I0813 07:10:03.597340 2516 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:10:03.598054 kubelet[2516]: I0813 07:10:03.597373 2516 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:10:03.598054 kubelet[2516]: I0813 07:10:03.597384 2516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:10:03.598568 kubelet[2516]: I0813 07:10:03.598539 2516 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:10:03.599129 kubelet[2516]: I0813 07:10:03.599101 2516 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:10:03.604452 kubelet[2516]: I0813 07:10:03.599720 2516 server.go:1274] "Started kubelet" Aug 13 07:10:03.604452 kubelet[2516]: I0813 07:10:03.600111 2516 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:10:03.604452 kubelet[2516]: I0813 
07:10:03.601795 2516 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:10:03.604452 kubelet[2516]: I0813 07:10:03.602143 2516 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:10:03.604923 kubelet[2516]: I0813 07:10:03.604891 2516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:10:03.606462 kubelet[2516]: I0813 07:10:03.606431 2516 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:10:03.607929 kubelet[2516]: I0813 07:10:03.607900 2516 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:10:03.608095 kubelet[2516]: I0813 07:10:03.607917 2516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:10:03.608184 kubelet[2516]: I0813 07:10:03.608156 2516 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:10:03.608932 kubelet[2516]: I0813 07:10:03.608033 2516 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:10:03.726768 kubelet[2516]: I0813 07:10:03.726730 2516 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:10:03.726883 kubelet[2516]: I0813 07:10:03.726851 2516 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:10:03.728123 kubelet[2516]: E0813 07:10:03.728086 2516 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:10:03.728751 kubelet[2516]: I0813 07:10:03.728730 2516 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:10:03.736599 kubelet[2516]: I0813 07:10:03.736520 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:10:03.738568 kubelet[2516]: I0813 07:10:03.738112 2516 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:10:03.738568 kubelet[2516]: I0813 07:10:03.738145 2516 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:10:03.738568 kubelet[2516]: I0813 07:10:03.738167 2516 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:10:03.738568 kubelet[2516]: E0813 07:10:03.738207 2516 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:10:03.768964 kubelet[2516]: I0813 07:10:03.768930 2516 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:10:03.768964 kubelet[2516]: I0813 07:10:03.768947 2516 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:10:03.768964 kubelet[2516]: I0813 07:10:03.768990 2516 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:10:03.769183 kubelet[2516]: I0813 07:10:03.769165 2516 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:10:03.769206 kubelet[2516]: I0813 07:10:03.769178 2516 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:10:03.769206 kubelet[2516]: I0813 07:10:03.769197 2516 policy_none.go:49] "None policy: Start" Aug 13 07:10:03.769789 kubelet[2516]: I0813 07:10:03.769764 2516 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:10:03.769864 kubelet[2516]: I0813 07:10:03.769796 2516 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:10:03.769994 kubelet[2516]: I0813 07:10:03.769957 2516 state_mem.go:75] "Updated machine memory state" Aug 13 07:10:03.774855 kubelet[2516]: I0813 07:10:03.774667 2516 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:10:03.774855 kubelet[2516]: I0813 07:10:03.774849 2516 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:10:03.774964 kubelet[2516]: I0813 07:10:03.774860 2516 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:10:03.775280 kubelet[2516]: I0813 07:10:03.775189 2516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:10:03.881331 kubelet[2516]: I0813 07:10:03.881243 2516 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:10:03.887422 kubelet[2516]: I0813 07:10:03.887365 2516 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 07:10:03.887581 kubelet[2516]: I0813 07:10:03.887450 2516 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:10:03.920390 kubelet[2516]: I0813 07:10:03.920341 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:10:03.920390 kubelet[2516]: I0813 07:10:03.920382 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:10:03.920594 kubelet[2516]: I0813 07:10:03.920410 2516 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:10:03.920594 kubelet[2516]: I0813 07:10:03.920468 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/297a949d7b2fb6ab415511ef9f169b4e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"297a949d7b2fb6ab415511ef9f169b4e\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:10:03.920594 kubelet[2516]: I0813 07:10:03.920485 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/297a949d7b2fb6ab415511ef9f169b4e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"297a949d7b2fb6ab415511ef9f169b4e\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:10:03.920594 kubelet[2516]: I0813 07:10:03.920500 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/297a949d7b2fb6ab415511ef9f169b4e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"297a949d7b2fb6ab415511ef9f169b4e\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:10:03.920594 kubelet[2516]: I0813 07:10:03.920514 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:10:03.920710 kubelet[2516]: I0813 07:10:03.920529 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:10:03.920710 kubelet[2516]: I0813 07:10:03.920543 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:10:04.157175 kubelet[2516]: E0813 07:10:04.156931 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:04.157175 kubelet[2516]: E0813 07:10:04.157060 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:04.158147 kubelet[2516]: E0813 07:10:04.158080 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:04.598589 kubelet[2516]: I0813 07:10:04.598538 2516 apiserver.go:52] "Watching apiserver" Aug 13 07:10:04.609855 kubelet[2516]: I0813 07:10:04.609817 2516 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:10:04.753542 kubelet[2516]: E0813 07:10:04.753333 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:04.753542 kubelet[2516]: E0813 07:10:04.753333 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:04.753542 kubelet[2516]: E0813 07:10:04.753392 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:04.894253 kubelet[2516]: I0813 07:10:04.894012 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.893964269 podStartE2EDuration="1.893964269s" podCreationTimestamp="2025-08-13 07:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:04.86487619 +0000 UTC m=+1.329782179" watchObservedRunningTime="2025-08-13 07:10:04.893964269 +0000 UTC m=+1.358870258" Aug 13 07:10:04.902571 kubelet[2516]: I0813 07:10:04.902402 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.902381613 podStartE2EDuration="1.902381613s" podCreationTimestamp="2025-08-13 07:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:04.902381884 +0000 UTC m=+1.367287883" watchObservedRunningTime="2025-08-13 07:10:04.902381613 +0000 UTC m=+1.367287602" Aug 13 07:10:04.902571 kubelet[2516]: I0813 07:10:04.902475 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.902470091 podStartE2EDuration="1.902470091s" podCreationTimestamp="2025-08-13 07:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:04.894198364 +0000 UTC m=+1.359104353" watchObservedRunningTime="2025-08-13 07:10:04.902470091 +0000 UTC m=+1.367376080" Aug 13 07:10:05.754863 kubelet[2516]: E0813 07:10:05.754809 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:07.039717 kubelet[2516]: E0813 07:10:07.039658 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:08.086174 update_engine[1460]: I20250813 07:10:08.086069 1460 update_attempter.cc:509] Updating boot flags... 
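[annotation] In the pod_startup_latency_tracker entries above, podStartSLOduration works out to watchObservedRunningTime minus podCreationTimestamp for these static pods (the pull timestamps are zero, so no image-pull time is subtracted). The timestamps use Go's default time formatting, so the arithmetic can be checked directly:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Go's default time.String() layout, which these log timestamps use
    // (the trailing "m=+..." monotonic suffix is dropped before parsing).
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
    	created, err := time.Parse(layout, "2025-08-13 07:10:03 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	observed, err := time.Parse(layout, "2025-08-13 07:10:04.893964269 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	// Matches podStartSLOduration=1.893964269 for kube-apiserver-localhost.
    	fmt.Println(observed.Sub(created).Seconds())
    }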
Aug 13 07:10:08.122092 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2573) Aug 13 07:10:08.166328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2577) Aug 13 07:10:08.192011 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2577) Aug 13 07:10:08.300742 kubelet[2516]: I0813 07:10:08.300701 2516 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:10:08.301235 containerd[1471]: time="2025-08-13T07:10:08.301152063Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:10:08.301489 kubelet[2516]: I0813 07:10:08.301431 2516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:10:09.217671 systemd[1]: Created slice kubepods-besteffort-pod3425735e_a13b_4b87_bdd0_cad127d951fe.slice - libcontainer container kubepods-besteffort-pod3425735e_a13b_4b87_bdd0_cad127d951fe.slice. Aug 13 07:10:09.248765 kubelet[2516]: I0813 07:10:09.248703 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3425735e-a13b-4b87-bdd0-cad127d951fe-kube-proxy\") pod \"kube-proxy-vjnx9\" (UID: \"3425735e-a13b-4b87-bdd0-cad127d951fe\") " pod="kube-system/kube-proxy-vjnx9" Aug 13 07:10:09.248765 kubelet[2516]: I0813 07:10:09.248752 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9858\" (UniqueName: \"kubernetes.io/projected/3425735e-a13b-4b87-bdd0-cad127d951fe-kube-api-access-m9858\") pod \"kube-proxy-vjnx9\" (UID: \"3425735e-a13b-4b87-bdd0-cad127d951fe\") " pod="kube-system/kube-proxy-vjnx9" Aug 13 07:10:09.248765 kubelet[2516]: I0813 07:10:09.248771 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3425735e-a13b-4b87-bdd0-cad127d951fe-xtables-lock\") pod \"kube-proxy-vjnx9\" (UID: \"3425735e-a13b-4b87-bdd0-cad127d951fe\") " pod="kube-system/kube-proxy-vjnx9" Aug 13 07:10:09.248991 kubelet[2516]: I0813 07:10:09.248790 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3425735e-a13b-4b87-bdd0-cad127d951fe-lib-modules\") pod \"kube-proxy-vjnx9\" (UID: \"3425735e-a13b-4b87-bdd0-cad127d951fe\") " pod="kube-system/kube-proxy-vjnx9" Aug 13 07:10:09.270655 kubelet[2516]: E0813 07:10:09.270612 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:09.353126 kubelet[2516]: E0813 07:10:09.353088 2516 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 07:10:09.353126 kubelet[2516]: E0813 07:10:09.353122 2516 projected.go:194] Error preparing data for projected volume kube-api-access-m9858 for pod kube-system/kube-proxy-vjnx9: configmap "kube-root-ca.crt" not found Aug 13 07:10:09.353579 kubelet[2516]: E0813 07:10:09.353192 2516 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3425735e-a13b-4b87-bdd0-cad127d951fe-kube-api-access-m9858 podName:3425735e-a13b-4b87-bdd0-cad127d951fe nodeName:}" failed. 
No retries permitted until 2025-08-13 07:10:09.85316215 +0000 UTC m=+6.318068129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m9858" (UniqueName: "kubernetes.io/projected/3425735e-a13b-4b87-bdd0-cad127d951fe-kube-api-access-m9858") pod "kube-proxy-vjnx9" (UID: "3425735e-a13b-4b87-bdd0-cad127d951fe") : configmap "kube-root-ca.crt" not found Aug 13 07:10:09.478427 systemd[1]: Created slice kubepods-besteffort-pod749a214f_7a7e_4879_8893_b0b0fa3b8493.slice - libcontainer container kubepods-besteffort-pod749a214f_7a7e_4879_8893_b0b0fa3b8493.slice. Aug 13 07:10:09.550702 kubelet[2516]: I0813 07:10:09.550654 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/749a214f-7a7e-4879-8893-b0b0fa3b8493-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-rjchk\" (UID: \"749a214f-7a7e-4879-8893-b0b0fa3b8493\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rjchk" Aug 13 07:10:09.550836 kubelet[2516]: I0813 07:10:09.550704 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fpn\" (UniqueName: \"kubernetes.io/projected/749a214f-7a7e-4879-8893-b0b0fa3b8493-kube-api-access-55fpn\") pod \"tigera-operator-5bf8dfcb4-rjchk\" (UID: \"749a214f-7a7e-4879-8893-b0b0fa3b8493\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rjchk" Aug 13 07:10:09.760216 kubelet[2516]: E0813 07:10:09.760095 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:09.783105 containerd[1471]: time="2025-08-13T07:10:09.783029383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rjchk,Uid:749a214f-7a7e-4879-8893-b0b0fa3b8493,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:10:09.812729 containerd[1471]: time="2025-08-13T07:10:09.812602687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:09.812729 containerd[1471]: time="2025-08-13T07:10:09.812675726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:09.812729 containerd[1471]: time="2025-08-13T07:10:09.812687538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:09.812957 containerd[1471]: time="2025-08-13T07:10:09.812796234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:09.841131 systemd[1]: Started cri-containerd-7da111d64f6cf44911093667623116ff4e7466ab3b70b817c03999d51446c915.scope - libcontainer container 7da111d64f6cf44911093667623116ff4e7466ab3b70b817c03999d51446c915. 
Aug 13 07:10:09.879674 containerd[1471]: time="2025-08-13T07:10:09.879633952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rjchk,Uid:749a214f-7a7e-4879-8893-b0b0fa3b8493,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7da111d64f6cf44911093667623116ff4e7466ab3b70b817c03999d51446c915\"" Aug 13 07:10:09.881764 containerd[1471]: time="2025-08-13T07:10:09.881729689Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:10:10.128469 kubelet[2516]: E0813 07:10:10.128425 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:10.129567 containerd[1471]: time="2025-08-13T07:10:10.129526080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjnx9,Uid:3425735e-a13b-4b87-bdd0-cad127d951fe,Namespace:kube-system,Attempt:0,}" Aug 13 07:10:10.155116 containerd[1471]: time="2025-08-13T07:10:10.153928206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:10.155116 containerd[1471]: time="2025-08-13T07:10:10.154948025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:10.155273 containerd[1471]: time="2025-08-13T07:10:10.154999943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:10.155273 containerd[1471]: time="2025-08-13T07:10:10.155148745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:10.180112 systemd[1]: Started cri-containerd-578ed26ab8314f7f44fc4ff26a73f5c57114016a4faf524bc0e31d7008d8da64.scope - libcontainer container 578ed26ab8314f7f44fc4ff26a73f5c57114016a4faf524bc0e31d7008d8da64. Aug 13 07:10:10.207271 containerd[1471]: time="2025-08-13T07:10:10.207230219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjnx9,Uid:3425735e-a13b-4b87-bdd0-cad127d951fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"578ed26ab8314f7f44fc4ff26a73f5c57114016a4faf524bc0e31d7008d8da64\"" Aug 13 07:10:10.208083 kubelet[2516]: E0813 07:10:10.208031 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:10.210657 containerd[1471]: time="2025-08-13T07:10:10.210619530Z" level=info msg="CreateContainer within sandbox \"578ed26ab8314f7f44fc4ff26a73f5c57114016a4faf524bc0e31d7008d8da64\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:10:10.230294 containerd[1471]: time="2025-08-13T07:10:10.230232226Z" level=info msg="CreateContainer within sandbox \"578ed26ab8314f7f44fc4ff26a73f5c57114016a4faf524bc0e31d7008d8da64\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ed7d77a4362800bd2a9fe59a6a52fc25b3321588493563490e409fc9cf401cb\"" Aug 13 07:10:10.230956 containerd[1471]: time="2025-08-13T07:10:10.230919356Z" level=info msg="StartContainer for \"1ed7d77a4362800bd2a9fe59a6a52fc25b3321588493563490e409fc9cf401cb\"" Aug 13 07:10:10.259132 systemd[1]: Started cri-containerd-1ed7d77a4362800bd2a9fe59a6a52fc25b3321588493563490e409fc9cf401cb.scope - libcontainer container 1ed7d77a4362800bd2a9fe59a6a52fc25b3321588493563490e409fc9cf401cb. 
Aug 13 07:10:10.291708 containerd[1471]: time="2025-08-13T07:10:10.291644723Z" level=info msg="StartContainer for \"1ed7d77a4362800bd2a9fe59a6a52fc25b3321588493563490e409fc9cf401cb\" returns successfully" Aug 13 07:10:10.762897 kubelet[2516]: E0813 07:10:10.762797 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:12.669861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197943534.mount: Deactivated successfully. Aug 13 07:10:12.743309 kubelet[2516]: E0813 07:10:12.743270 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:12.757107 kubelet[2516]: I0813 07:10:12.756777 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vjnx9" podStartSLOduration=3.7567563059999998 podStartE2EDuration="3.756756306s" podCreationTimestamp="2025-08-13 07:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:10.772913898 +0000 UTC m=+7.237819887" watchObservedRunningTime="2025-08-13 07:10:12.756756306 +0000 UTC m=+9.221662295" Aug 13 07:10:12.774691 kubelet[2516]: E0813 07:10:12.770132 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:13.004220 containerd[1471]: time="2025-08-13T07:10:13.004091120Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:13.005014 containerd[1471]: time="2025-08-13T07:10:13.004937539Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:10:13.006123 containerd[1471]: time="2025-08-13T07:10:13.006092660Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:13.010456 containerd[1471]: time="2025-08-13T07:10:13.010370471Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.128597079s" Aug 13 07:10:13.010456 containerd[1471]: time="2025-08-13T07:10:13.010401740Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:10:13.011173 containerd[1471]: time="2025-08-13T07:10:13.011137239Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:13.013581 containerd[1471]: time="2025-08-13T07:10:13.013537222Z" level=info msg="CreateContainer within sandbox \"7da111d64f6cf44911093667623116ff4e7466ab3b70b817c03999d51446c915\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:10:13.026384 containerd[1471]: time="2025-08-13T07:10:13.026334575Z" 
level=info msg="CreateContainer within sandbox \"7da111d64f6cf44911093667623116ff4e7466ab3b70b817c03999d51446c915\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"22a532f4def81528c26ca1fef3f84fd582c0b7c2e4857a47bb0e3c16034d31e7\"" Aug 13 07:10:13.026901 containerd[1471]: time="2025-08-13T07:10:13.026827576Z" level=info msg="StartContainer for \"22a532f4def81528c26ca1fef3f84fd582c0b7c2e4857a47bb0e3c16034d31e7\"" Aug 13 07:10:13.058136 systemd[1]: Started cri-containerd-22a532f4def81528c26ca1fef3f84fd582c0b7c2e4857a47bb0e3c16034d31e7.scope - libcontainer container 22a532f4def81528c26ca1fef3f84fd582c0b7c2e4857a47bb0e3c16034d31e7. Aug 13 07:10:13.086507 containerd[1471]: time="2025-08-13T07:10:13.086437060Z" level=info msg="StartContainer for \"22a532f4def81528c26ca1fef3f84fd582c0b7c2e4857a47bb0e3c16034d31e7\" returns successfully" Aug 13 07:10:13.771529 kubelet[2516]: E0813 07:10:13.771491 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:17.044337 kubelet[2516]: E0813 07:10:17.044301 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:17.269262 kubelet[2516]: I0813 07:10:17.269165 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-rjchk" podStartSLOduration=5.138109495 podStartE2EDuration="8.269143521s" podCreationTimestamp="2025-08-13 07:10:09 +0000 UTC" firstStartedPulling="2025-08-13 07:10:09.881178976 +0000 UTC m=+6.346084965" lastFinishedPulling="2025-08-13 07:10:13.012213002 +0000 UTC m=+9.477118991" observedRunningTime="2025-08-13 07:10:13.780407499 +0000 UTC m=+10.245313489" watchObservedRunningTime="2025-08-13 07:10:17.269143521 +0000 UTC m=+13.734049510" Aug 13 07:10:19.936520 sudo[1653]: pam_unix(sudo:session): session closed for user root Aug 13 07:10:19.974210 sshd[1650]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:19.984233 systemd[1]: sshd@6-10.0.0.75:22-10.0.0.1:38882.service: Deactivated successfully. Aug 13 07:10:19.986409 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:10:19.986636 systemd[1]: session-7.scope: Consumed 4.556s CPU time, 153.7M memory peak, 0B memory swap peak. Aug 13 07:10:19.987046 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:10:19.987962 systemd-logind[1459]: Removed session 7. Aug 13 07:10:23.187167 systemd[1]: Created slice kubepods-besteffort-pod3ffe0e07_e06c_46e4_b914_80b14929d5cb.slice - libcontainer container kubepods-besteffort-pod3ffe0e07_e06c_46e4_b914_80b14929d5cb.slice. 
Aug 13 07:10:23.224193 kubelet[2516]: I0813 07:10:23.224135 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ffe0e07-e06c-46e4-b914-80b14929d5cb-typha-certs\") pod \"calico-typha-67766fbd4d-qll64\" (UID: \"3ffe0e07-e06c-46e4-b914-80b14929d5cb\") " pod="calico-system/calico-typha-67766fbd4d-qll64" Aug 13 07:10:23.224193 kubelet[2516]: I0813 07:10:23.224199 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfvv8\" (UniqueName: \"kubernetes.io/projected/3ffe0e07-e06c-46e4-b914-80b14929d5cb-kube-api-access-gfvv8\") pod \"calico-typha-67766fbd4d-qll64\" (UID: \"3ffe0e07-e06c-46e4-b914-80b14929d5cb\") " pod="calico-system/calico-typha-67766fbd4d-qll64" Aug 13 07:10:23.224692 kubelet[2516]: I0813 07:10:23.224229 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ffe0e07-e06c-46e4-b914-80b14929d5cb-tigera-ca-bundle\") pod \"calico-typha-67766fbd4d-qll64\" (UID: \"3ffe0e07-e06c-46e4-b914-80b14929d5cb\") " pod="calico-system/calico-typha-67766fbd4d-qll64" Aug 13 07:10:23.492084 kubelet[2516]: E0813 07:10:23.491851 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:23.492960 containerd[1471]: time="2025-08-13T07:10:23.492720263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67766fbd4d-qll64,Uid:3ffe0e07-e06c-46e4-b914-80b14929d5cb,Namespace:calico-system,Attempt:0,}" Aug 13 07:10:23.525091 containerd[1471]: time="2025-08-13T07:10:23.524731342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:23.525091 containerd[1471]: time="2025-08-13T07:10:23.524813185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:23.525091 containerd[1471]: time="2025-08-13T07:10:23.524843632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:23.525329 containerd[1471]: time="2025-08-13T07:10:23.525196677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:23.550400 systemd[1]: Started cri-containerd-4429161f47c58ddc2aab9cca0be12d501490fc281178b9b5edd9640f76c550fb.scope - libcontainer container 4429161f47c58ddc2aab9cca0be12d501490fc281178b9b5edd9640f76c550fb. Aug 13 07:10:23.580259 systemd[1]: Created slice kubepods-besteffort-pode9d86c20_9bfc_4464_b11e_76f210bf529b.slice - libcontainer container kubepods-besteffort-pode9d86c20_9bfc_4464_b11e_76f210bf529b.slice. 
Aug 13 07:10:23.602261 containerd[1471]: time="2025-08-13T07:10:23.602216617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67766fbd4d-qll64,Uid:3ffe0e07-e06c-46e4-b914-80b14929d5cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"4429161f47c58ddc2aab9cca0be12d501490fc281178b9b5edd9640f76c550fb\"" Aug 13 07:10:23.606767 kubelet[2516]: E0813 07:10:23.606635 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:23.609142 containerd[1471]: time="2025-08-13T07:10:23.609115541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:10:23.626546 kubelet[2516]: I0813 07:10:23.626110 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-var-run-calico\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626546 kubelet[2516]: I0813 07:10:23.626157 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9d86c20-9bfc-4464-b11e-76f210bf529b-node-certs\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626546 kubelet[2516]: I0813 07:10:23.626187 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-cni-log-dir\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626546 kubelet[2516]: I0813 07:10:23.626242 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d86c20-9bfc-4464-b11e-76f210bf529b-tigera-ca-bundle\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626546 kubelet[2516]: I0813 07:10:23.626267 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-cni-net-dir\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626777 kubelet[2516]: I0813 07:10:23.626284 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-xtables-lock\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626777 kubelet[2516]: I0813 07:10:23.626305 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-lib-modules\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626777 kubelet[2516]: I0813 07:10:23.626336 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-cni-bin-dir\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626777 kubelet[2516]: I0813 07:10:23.626390 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-flexvol-driver-host\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626777 kubelet[2516]: I0813 07:10:23.626411 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-var-lib-calico\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626899 kubelet[2516]: I0813 07:10:23.626428 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9d86c20-9bfc-4464-b11e-76f210bf529b-policysync\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.626899 kubelet[2516]: I0813 07:10:23.626443 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4f8\" (UniqueName: \"kubernetes.io/projected/e9d86c20-9bfc-4464-b11e-76f210bf529b-kube-api-access-lm4f8\") pod \"calico-node-z4jtm\" (UID: \"e9d86c20-9bfc-4464-b11e-76f210bf529b\") " pod="calico-system/calico-node-z4jtm" Aug 13 07:10:23.729458 kubelet[2516]: E0813 07:10:23.729418 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.729458 kubelet[2516]: W0813 07:10:23.729442 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.729458 kubelet[2516]: E0813 07:10:23.729468 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.731794 kubelet[2516]: E0813 07:10:23.731769 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.731794 kubelet[2516]: W0813 07:10:23.731790 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.731866 kubelet[2516]: E0813 07:10:23.731810 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.735363 kubelet[2516]: E0813 07:10:23.735322 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.735363 kubelet[2516]: W0813 07:10:23.735351 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.735363 kubelet[2516]: E0813 07:10:23.735374 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.867574 kubelet[2516]: E0813 07:10:23.867458 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:23.883658 containerd[1471]: time="2025-08-13T07:10:23.883606712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4jtm,Uid:e9d86c20-9bfc-4464-b11e-76f210bf529b,Namespace:calico-system,Attempt:0,}" Aug 13 07:10:23.909180 containerd[1471]: time="2025-08-13T07:10:23.909032816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:23.909180 containerd[1471]: time="2025-08-13T07:10:23.909135039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:23.909180 containerd[1471]: time="2025-08-13T07:10:23.909147252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:23.909336 containerd[1471]: time="2025-08-13T07:10:23.909281264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:23.926513 kubelet[2516]: E0813 07:10:23.926474 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.926513 kubelet[2516]: W0813 07:10:23.926499 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.926646 kubelet[2516]: E0813 07:10:23.926523 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.926877 kubelet[2516]: E0813 07:10:23.926857 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.926877 kubelet[2516]: W0813 07:10:23.926869 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.926929 kubelet[2516]: E0813 07:10:23.926880 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.927268 kubelet[2516]: E0813 07:10:23.927239 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.927268 kubelet[2516]: W0813 07:10:23.927260 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.927268 kubelet[2516]: E0813 07:10:23.927270 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.927522 kubelet[2516]: E0813 07:10:23.927507 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.927522 kubelet[2516]: W0813 07:10:23.927518 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.927572 kubelet[2516]: E0813 07:10:23.927528 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.927777 kubelet[2516]: E0813 07:10:23.927765 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.927777 kubelet[2516]: W0813 07:10:23.927774 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.927817 kubelet[2516]: E0813 07:10:23.927782 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.928025 kubelet[2516]: E0813 07:10:23.928001 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.928025 kubelet[2516]: W0813 07:10:23.928012 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.928025 kubelet[2516]: E0813 07:10:23.928020 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.928225 systemd[1]: Started cri-containerd-ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393.scope - libcontainer container ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393. 
Aug 13 07:10:23.928500 kubelet[2516]: E0813 07:10:23.928256 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.928500 kubelet[2516]: W0813 07:10:23.928263 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.928500 kubelet[2516]: E0813 07:10:23.928271 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.928500 kubelet[2516]: E0813 07:10:23.928466 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.928500 kubelet[2516]: W0813 07:10:23.928476 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.928500 kubelet[2516]: E0813 07:10:23.928484 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.928777 kubelet[2516]: E0813 07:10:23.928709 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.928777 kubelet[2516]: W0813 07:10:23.928717 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.928777 kubelet[2516]: E0813 07:10:23.928725 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.929095 kubelet[2516]: E0813 07:10:23.929070 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.929095 kubelet[2516]: W0813 07:10:23.929082 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.929673 kubelet[2516]: E0813 07:10:23.929630 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.930076 kubelet[2516]: E0813 07:10:23.930052 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.930126 kubelet[2516]: W0813 07:10:23.930077 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.930126 kubelet[2516]: E0813 07:10:23.930098 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.931014 kubelet[2516]: E0813 07:10:23.930512 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.931014 kubelet[2516]: W0813 07:10:23.930533 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.931014 kubelet[2516]: E0813 07:10:23.930547 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.931109 kubelet[2516]: E0813 07:10:23.931098 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.931139 kubelet[2516]: W0813 07:10:23.931111 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.931181 kubelet[2516]: E0813 07:10:23.931157 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.931513 kubelet[2516]: E0813 07:10:23.931493 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.931513 kubelet[2516]: W0813 07:10:23.931510 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.931576 kubelet[2516]: E0813 07:10:23.931523 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.931905 kubelet[2516]: E0813 07:10:23.931885 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.931905 kubelet[2516]: W0813 07:10:23.931902 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.931964 kubelet[2516]: E0813 07:10:23.931916 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.932400 kubelet[2516]: E0813 07:10:23.932371 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.932442 kubelet[2516]: W0813 07:10:23.932389 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.932466 kubelet[2516]: E0813 07:10:23.932444 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.933333 kubelet[2516]: E0813 07:10:23.933302 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.933407 kubelet[2516]: W0813 07:10:23.933357 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.933407 kubelet[2516]: E0813 07:10:23.933386 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.933722 kubelet[2516]: E0813 07:10:23.933701 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.933722 kubelet[2516]: W0813 07:10:23.933718 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.933784 kubelet[2516]: E0813 07:10:23.933732 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.934017 kubelet[2516]: E0813 07:10:23.933995 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.934017 kubelet[2516]: W0813 07:10:23.934014 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.934086 kubelet[2516]: E0813 07:10:23.934028 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.934400 kubelet[2516]: E0813 07:10:23.934370 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.934425 kubelet[2516]: W0813 07:10:23.934389 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.934486 kubelet[2516]: E0813 07:10:23.934469 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.935127 kubelet[2516]: E0813 07:10:23.934946 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.935127 kubelet[2516]: W0813 07:10:23.934965 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.935127 kubelet[2516]: E0813 07:10:23.935007 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.935127 kubelet[2516]: I0813 07:10:23.935036 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56153e13-236a-410f-9ff5-c48bb048b643-kubelet-dir\") pod \"csi-node-driver-6twvn\" (UID: \"56153e13-236a-410f-9ff5-c48bb048b643\") " pod="calico-system/csi-node-driver-6twvn" Aug 13 07:10:23.935365 kubelet[2516]: E0813 07:10:23.935318 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.935396 kubelet[2516]: W0813 07:10:23.935364 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.935423 kubelet[2516]: E0813 07:10:23.935410 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.935469 kubelet[2516]: I0813 07:10:23.935451 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56153e13-236a-410f-9ff5-c48bb048b643-socket-dir\") pod \"csi-node-driver-6twvn\" (UID: \"56153e13-236a-410f-9ff5-c48bb048b643\") " pod="calico-system/csi-node-driver-6twvn" Aug 13 07:10:23.935794 kubelet[2516]: E0813 07:10:23.935777 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.935794 kubelet[2516]: W0813 07:10:23.935790 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.935857 kubelet[2516]: E0813 07:10:23.935807 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.935857 kubelet[2516]: I0813 07:10:23.935824 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjb8r\" (UniqueName: \"kubernetes.io/projected/56153e13-236a-410f-9ff5-c48bb048b643-kube-api-access-vjb8r\") pod \"csi-node-driver-6twvn\" (UID: \"56153e13-236a-410f-9ff5-c48bb048b643\") " pod="calico-system/csi-node-driver-6twvn" Aug 13 07:10:23.936291 kubelet[2516]: E0813 07:10:23.936084 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.936291 kubelet[2516]: W0813 07:10:23.936099 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.936291 kubelet[2516]: E0813 07:10:23.936117 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.936291 kubelet[2516]: I0813 07:10:23.936141 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56153e13-236a-410f-9ff5-c48bb048b643-registration-dir\") pod \"csi-node-driver-6twvn\" (UID: \"56153e13-236a-410f-9ff5-c48bb048b643\") " pod="calico-system/csi-node-driver-6twvn" Aug 13 07:10:23.936450 kubelet[2516]: E0813 07:10:23.936431 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.936450 kubelet[2516]: W0813 07:10:23.936446 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.936590 kubelet[2516]: E0813 07:10:23.936566 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.936675 kubelet[2516]: I0813 07:10:23.936645 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/56153e13-236a-410f-9ff5-c48bb048b643-varrun\") pod \"csi-node-driver-6twvn\" (UID: \"56153e13-236a-410f-9ff5-c48bb048b643\") " pod="calico-system/csi-node-driver-6twvn" Aug 13 07:10:23.936733 kubelet[2516]: E0813 07:10:23.936723 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.936759 kubelet[2516]: W0813 07:10:23.936736 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.936921 kubelet[2516]: E0813 07:10:23.936829 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.936961 kubelet[2516]: E0813 07:10:23.936946 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.936961 kubelet[2516]: W0813 07:10:23.936954 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.937151 kubelet[2516]: E0813 07:10:23.937063 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.937191 kubelet[2516]: E0813 07:10:23.937175 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.937191 kubelet[2516]: W0813 07:10:23.937184 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.937395 kubelet[2516]: E0813 07:10:23.937195 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.937552 kubelet[2516]: E0813 07:10:23.937534 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.937552 kubelet[2516]: W0813 07:10:23.937548 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.937607 kubelet[2516]: E0813 07:10:23.937564 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.937804 kubelet[2516]: E0813 07:10:23.937786 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.937804 kubelet[2516]: W0813 07:10:23.937799 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.937860 kubelet[2516]: E0813 07:10:23.937812 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.938140 kubelet[2516]: E0813 07:10:23.938124 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.938140 kubelet[2516]: W0813 07:10:23.938137 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.938204 kubelet[2516]: E0813 07:10:23.938147 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.938444 kubelet[2516]: E0813 07:10:23.938405 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.938444 kubelet[2516]: W0813 07:10:23.938418 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.938444 kubelet[2516]: E0813 07:10:23.938428 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.938680 kubelet[2516]: E0813 07:10:23.938646 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.938680 kubelet[2516]: W0813 07:10:23.938660 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.938729 kubelet[2516]: E0813 07:10:23.938709 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:23.939062 kubelet[2516]: E0813 07:10:23.939043 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.939107 kubelet[2516]: W0813 07:10:23.939086 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.939107 kubelet[2516]: E0813 07:10:23.939099 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.939373 kubelet[2516]: E0813 07:10:23.939355 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:23.939373 kubelet[2516]: W0813 07:10:23.939368 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:23.939433 kubelet[2516]: E0813 07:10:23.939377 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:23.957823 containerd[1471]: time="2025-08-13T07:10:23.957771447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4jtm,Uid:e9d86c20-9bfc-4464-b11e-76f210bf529b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393\"" Aug 13 07:10:24.037890 kubelet[2516]: E0813 07:10:24.037833 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:24.037890 kubelet[2516]: W0813 07:10:24.037864 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:24.037890 kubelet[2516]: E0813 07:10:24.037893 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:24.038289 kubelet[2516]: E0813 07:10:24.038244 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:24.038289 kubelet[2516]: W0813 07:10:24.038267 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:24.038289 kubelet[2516]: E0813 07:10:24.038283 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 07:10:24.053759 kubelet[2516]: E0813 07:10:24.053731 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:10:24.053759 kubelet[2516]: W0813 07:10:24.053747 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:10:24.053759 kubelet[2516]: E0813 07:10:24.053758 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:10:25.603243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719734475.mount: Deactivated successfully.
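
Note: the kubelet triplet above (driver-call.go:262 / driver-call.go:149 / plugins.go:691) is the FlexVolume plugin prober, and it recurs on every probe pass for as long as the condition persists. kubelet executes each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and unmarshals the driver's stdout as JSON; here the nodeagent~uds directory exists but its uds executable does not, so the exec fails, stdout stays empty, and decoding "" produces exactly "unexpected end of JSON input". A minimal sketch of the init handshake such a driver would have to implement, following the FlexVolume driver convention (illustrative only, not the real uds agent):

    // flexvolume_stub.go - sketch of the FlexVolume "init" handshake that
    // kubelet's driver-call.go expects. Field names follow the FlexVolume
    // driver convention; the binary itself is hypothetical.
    package main

    import (
        "encoding/json"
        "os"
    )

    // driverStatus mirrors the JSON reply kubelet unmarshals from the driver.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // returned by "init"
    }

    func reply(s driverStatus, code int) {
        // An empty stdout here is precisely the logged unmarshal error.
        json.NewEncoder(os.Stdout).Encode(s)
        os.Exit(code)
    }

    func main() {
        if len(os.Args) < 2 {
            // kubelet always passes a command verb (init, mount, unmount, ...).
            reply(driverStatus{Status: "Failure", Message: "missing command"}, 1)
        }
        switch os.Args[1] {
        case "init":
            // "attach": false tells kubelet to skip attach/detach calls.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
        default:
            // mount/unmount and friends are left unimplemented in this sketch.
            reply(driverStatus{Status: "Not supported"}, 1)
        }
    }

Installing a driver that answers init like this, or simply removing the empty nodeagent~uds directory, should silence the probe loop.
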
Aug 13 07:10:25.739457 kubelet[2516]: E0813 07:10:25.739386 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643"
Aug 13 07:10:27.291472 containerd[1471]: time="2025-08-13T07:10:27.291402309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:10:27.292621 containerd[1471]: time="2025-08-13T07:10:27.292532655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:10:27.294304 containerd[1471]: time="2025-08-13T07:10:27.294237680Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:10:27.297827 containerd[1471]: time="2025-08-13T07:10:27.297788918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:10:27.298450 containerd[1471]: time="2025-08-13T07:10:27.298417921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.689272173s"
Aug 13 07:10:27.298498 containerd[1471]: time="2025-08-13T07:10:27.298457355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:10:27.300182 containerd[1471]: time="2025-08-13T07:10:27.299999155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:10:27.327248 containerd[1471]: time="2025-08-13T07:10:27.327188061Z" level=info msg="CreateContainer within sandbox \"4429161f47c58ddc2aab9cca0be12d501490fc281178b9b5edd9640f76c550fb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:10:27.343800 containerd[1471]: time="2025-08-13T07:10:27.343723053Z" level=info msg="CreateContainer within sandbox \"4429161f47c58ddc2aab9cca0be12d501490fc281178b9b5edd9640f76c550fb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d081b0b00eb575d2c036715c57509ca538e3a19a2b2cf48f37555d3db9fc089f\""
Aug 13 07:10:27.348029 containerd[1471]: time="2025-08-13T07:10:27.347882113Z" level=info msg="StartContainer for \"d081b0b00eb575d2c036715c57509ca538e3a19a2b2cf48f37555d3db9fc089f\""
Aug 13 07:10:27.381124 systemd[1]: Started cri-containerd-d081b0b00eb575d2c036715c57509ca538e3a19a2b2cf48f37555d3db9fc089f.scope - libcontainer container d081b0b00eb575d2c036715c57509ca538e3a19a2b2cf48f37555d3db9fc089f.
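
Note: the "cni plugin not initialized" record above (pod_workers.go:1301, for csi-node-driver-6twvn) persists until a CNI network config exists on disk; the runtime reports NetworkReady=false while its config directory, conventionally /etc/cni/net.d, is empty, and the calico-node pod whose sandbox was created at 07:10:23 is what eventually installs that config. A sketch of the underlying check (the directory path is the conventional default, assumed rather than read from this log):

    // cnicheck.go - sketch of the condition behind "NetworkReady=false ...
    // cni plugin not initialized": the runtime has no network config to load.
    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Matches both *.conf and *.conflist files in the conventional dir.
        confs, err := filepath.Glob("/etc/cni/net.d/*.conf*")
        if err != nil || len(confs) == 0 {
            fmt.Println("NetworkReady=false: no CNI network config found")
            return
        }
        fmt.Println("CNI configs present:", confs)
    }

Once calico-node writes its config there, the NetworkReady condition should flip and the csi-node-driver pod should stop being skipped.
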
Aug 13 07:10:27.433754 containerd[1471]: time="2025-08-13T07:10:27.433654640Z" level=info msg="StartContainer for \"d081b0b00eb575d2c036715c57509ca538e3a19a2b2cf48f37555d3db9fc089f\" returns successfully"
Aug 13 07:10:27.741587 kubelet[2516]: E0813 07:10:27.740618 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643"
Aug 13 07:10:27.807401 kubelet[2516]: E0813 07:10:27.807355 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:10:27.815649 kubelet[2516]: I0813 07:10:27.815542 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67766fbd4d-qll64" podStartSLOduration=1.123099215 podStartE2EDuration="4.815389898s" podCreationTimestamp="2025-08-13 07:10:23 +0000 UTC" firstStartedPulling="2025-08-13 07:10:23.6073406 +0000 UTC m=+20.072246589" lastFinishedPulling="2025-08-13 07:10:27.299631273 +0000 UTC m=+23.764537272" observedRunningTime="2025-08-13 07:10:27.814787545 +0000 UTC m=+24.279693534" watchObservedRunningTime="2025-08-13 07:10:27.815389898 +0000 UTC m=+24.280295887"
Aug 13 07:10:27.861449 kubelet[2516]: E0813 07:10:27.861400 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:10:27.861449 kubelet[2516]: W0813 07:10:27.861437 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:10:27.861649 kubelet[2516]: E0813 07:10:27.861471 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
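
Note: the pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); that SLO formula is inferred from the numbers here, not quoted from kubelet source. A quick check of the arithmetic, with the timestamps copied from the record:

    // latency_check.go - re-derives the durations in the
    // pod_startup_latency_tracker record from its own timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(layout, s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created := mustParse(layout, "2025-08-13 07:10:23 +0000 UTC")
        firstPull := mustParse(layout, "2025-08-13 07:10:23.6073406 +0000 UTC")
        lastPull := mustParse(layout, "2025-08-13 07:10:27.299631273 +0000 UTC")
        running := mustParse(layout, "2025-08-13 07:10:27.815389898 +0000 UTC")

        fmt.Println("e2e :", running.Sub(created))    // 4.815389898s, matches podStartE2EDuration
        fmt.Println("pull:", lastPull.Sub(firstPull)) // ~3.692290673s image-pull window
        // ~1.123099225s; the logged podStartSLOduration=1.123099215 differs by
        // ~10ns, plausibly from kubelet mixing wall-clock and monotonic (m=+...) readings.
        fmt.Println("slo :", running.Sub(created)-lastPull.Sub(firstPull))
    }
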
Aug 13 07:10:28.804420 kubelet[2516]: E0813 07:10:28.804367 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:10:28.872488 kubelet[2516]: E0813 07:10:28.872450 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:10:28.872488 kubelet[2516]: W0813 07:10:28.872475 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:10:28.872707 kubelet[2516]: E0813 07:10:28.872501 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
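
Note: the recurring dns.go:153 warning means the node's resolv.conf lists more nameservers than kubelet will pass through: like the classic glibc resolver, kubelet caps the list at three, and the "applied nameserver line" shows the survivors 1.1.1.1, 1.0.0.1 and 8.8.8.8. A sketch of that truncation (the cap of 3 matches the applied line; the fourth server is hypothetical, standing in for whichever entry was omitted):

    // dns_limit.go - sketch of the nameserver cap behind "Nameserver limits
    // exceeded". Only the first three entries form the applied line.
    package main

    import "fmt"

    const maxDNSNameservers = 3 // kubelet's cap, matching the classic resolver limit

    func applyNameserverLimit(servers []string) (applied, omitted []string) {
        if len(servers) <= maxDNSNameservers {
            return servers, nil
        }
        return servers[:maxDNSNameservers], servers[maxDNSNameservers:]
    }

    func main() {
        // First three taken from the log; 9.9.9.9 is an assumed extra entry.
        resolvConf := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        applied, omitted := applyNameserverLimit(resolvConf)
        fmt.Println("applied:", applied) // [1.1.1.1 1.0.0.1 8.8.8.8]
        if len(omitted) > 0 {
            fmt.Println("Nameserver limits exceeded; omitted:", omitted)
        }
    }

Trimming the node's resolv.conf to three entries would make the warning stop.
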
Error: unexpected end of JSON input" Aug 13 07:10:28.873656 kubelet[2516]: E0813 07:10:28.873632 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.873656 kubelet[2516]: W0813 07:10:28.873650 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.873739 kubelet[2516]: E0813 07:10:28.873659 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.873875 kubelet[2516]: E0813 07:10:28.873861 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.873875 kubelet[2516]: W0813 07:10:28.873871 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.873944 kubelet[2516]: E0813 07:10:28.873882 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.874145 kubelet[2516]: E0813 07:10:28.874123 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.874145 kubelet[2516]: W0813 07:10:28.874138 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.874269 kubelet[2516]: E0813 07:10:28.874153 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.874447 kubelet[2516]: E0813 07:10:28.874427 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.874447 kubelet[2516]: W0813 07:10:28.874441 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.874523 kubelet[2516]: E0813 07:10:28.874453 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.874712 kubelet[2516]: E0813 07:10:28.874682 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.874712 kubelet[2516]: W0813 07:10:28.874695 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.874712 kubelet[2516]: E0813 07:10:28.874708 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:28.874930 kubelet[2516]: E0813 07:10:28.874911 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.874930 kubelet[2516]: W0813 07:10:28.874923 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.875049 kubelet[2516]: E0813 07:10:28.874935 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.875175 kubelet[2516]: E0813 07:10:28.875156 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.875175 kubelet[2516]: W0813 07:10:28.875169 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.875278 kubelet[2516]: E0813 07:10:28.875180 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.875415 kubelet[2516]: E0813 07:10:28.875396 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.875415 kubelet[2516]: W0813 07:10:28.875408 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.875480 kubelet[2516]: E0813 07:10:28.875419 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.875666 kubelet[2516]: E0813 07:10:28.875648 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.875666 kubelet[2516]: W0813 07:10:28.875660 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.875750 kubelet[2516]: E0813 07:10:28.875670 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.875894 kubelet[2516]: E0813 07:10:28.875876 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.875894 kubelet[2516]: W0813 07:10:28.875887 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.876007 kubelet[2516]: E0813 07:10:28.875898 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:28.876167 kubelet[2516]: E0813 07:10:28.876148 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.876167 kubelet[2516]: W0813 07:10:28.876163 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.876265 kubelet[2516]: E0813 07:10:28.876175 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.876517 kubelet[2516]: E0813 07:10:28.876497 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.876517 kubelet[2516]: W0813 07:10:28.876510 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.876603 kubelet[2516]: E0813 07:10:28.876522 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.876777 kubelet[2516]: E0813 07:10:28.876761 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.876777 kubelet[2516]: W0813 07:10:28.876773 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.876863 kubelet[2516]: E0813 07:10:28.876791 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.877081 kubelet[2516]: E0813 07:10:28.877064 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.877081 kubelet[2516]: W0813 07:10:28.877077 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.877183 kubelet[2516]: E0813 07:10:28.877094 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.877387 kubelet[2516]: E0813 07:10:28.877368 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.877387 kubelet[2516]: W0813 07:10:28.877380 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.877463 kubelet[2516]: E0813 07:10:28.877398 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:28.877729 kubelet[2516]: E0813 07:10:28.877685 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.877729 kubelet[2516]: W0813 07:10:28.877718 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.877789 kubelet[2516]: E0813 07:10:28.877749 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.878045 kubelet[2516]: E0813 07:10:28.877965 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.878045 kubelet[2516]: W0813 07:10:28.877999 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.878045 kubelet[2516]: E0813 07:10:28.878015 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.878273 kubelet[2516]: E0813 07:10:28.878253 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.878273 kubelet[2516]: W0813 07:10:28.878264 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.878351 kubelet[2516]: E0813 07:10:28.878292 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.878495 kubelet[2516]: E0813 07:10:28.878476 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.878495 kubelet[2516]: W0813 07:10:28.878487 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.878567 kubelet[2516]: E0813 07:10:28.878520 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.878732 kubelet[2516]: E0813 07:10:28.878713 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.878732 kubelet[2516]: W0813 07:10:28.878724 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.878802 kubelet[2516]: E0813 07:10:28.878749 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:28.879002 kubelet[2516]: E0813 07:10:28.878962 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.879002 kubelet[2516]: W0813 07:10:28.878972 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.879002 kubelet[2516]: E0813 07:10:28.879005 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.879217 kubelet[2516]: E0813 07:10:28.879200 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.879217 kubelet[2516]: W0813 07:10:28.879211 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.879291 kubelet[2516]: E0813 07:10:28.879232 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.879423 kubelet[2516]: E0813 07:10:28.879408 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.879423 kubelet[2516]: W0813 07:10:28.879417 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.879495 kubelet[2516]: E0813 07:10:28.879430 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.879658 kubelet[2516]: E0813 07:10:28.879642 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.879658 kubelet[2516]: W0813 07:10:28.879652 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.879727 kubelet[2516]: E0813 07:10:28.879664 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.880040 kubelet[2516]: E0813 07:10:28.880019 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.880040 kubelet[2516]: W0813 07:10:28.880035 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.880141 kubelet[2516]: E0813 07:10:28.880054 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:28.880352 kubelet[2516]: E0813 07:10:28.880332 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.880352 kubelet[2516]: W0813 07:10:28.880345 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.880352 kubelet[2516]: E0813 07:10:28.880363 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.880656 kubelet[2516]: E0813 07:10:28.880639 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.880656 kubelet[2516]: W0813 07:10:28.880651 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.880747 kubelet[2516]: E0813 07:10:28.880664 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.880882 kubelet[2516]: E0813 07:10:28.880866 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.880882 kubelet[2516]: W0813 07:10:28.880876 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.880945 kubelet[2516]: E0813 07:10:28.880887 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:28.881116 kubelet[2516]: E0813 07:10:28.881102 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:28.881116 kubelet[2516]: W0813 07:10:28.881114 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:28.881179 kubelet[2516]: E0813 07:10:28.881122 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:29.739081 kubelet[2516]: E0813 07:10:29.739009 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:29.806341 kubelet[2516]: E0813 07:10:29.806287 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:29.885185 kubelet[2516]: E0813 07:10:29.885127 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.885185 kubelet[2516]: W0813 07:10:29.885159 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.885185 kubelet[2516]: E0813 07:10:29.885185 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.885470 kubelet[2516]: E0813 07:10:29.885442 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.885470 kubelet[2516]: W0813 07:10:29.885454 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.885470 kubelet[2516]: E0813 07:10:29.885463 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.885706 kubelet[2516]: E0813 07:10:29.885679 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.885706 kubelet[2516]: W0813 07:10:29.885691 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.885706 kubelet[2516]: E0813 07:10:29.885699 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.885913 kubelet[2516]: E0813 07:10:29.885887 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.885913 kubelet[2516]: W0813 07:10:29.885899 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.885913 kubelet[2516]: E0813 07:10:29.885907 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:29.886144 kubelet[2516]: E0813 07:10:29.886119 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.886144 kubelet[2516]: W0813 07:10:29.886133 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.886144 kubelet[2516]: E0813 07:10:29.886141 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.886372 kubelet[2516]: E0813 07:10:29.886343 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.886372 kubelet[2516]: W0813 07:10:29.886355 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.886372 kubelet[2516]: E0813 07:10:29.886367 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.886572 kubelet[2516]: E0813 07:10:29.886554 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.886572 kubelet[2516]: W0813 07:10:29.886565 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.886621 kubelet[2516]: E0813 07:10:29.886573 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.886833 kubelet[2516]: E0813 07:10:29.886815 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.886833 kubelet[2516]: W0813 07:10:29.886826 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.886880 kubelet[2516]: E0813 07:10:29.886834 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.887140 kubelet[2516]: E0813 07:10:29.887123 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.887140 kubelet[2516]: W0813 07:10:29.887134 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.887274 kubelet[2516]: E0813 07:10:29.887143 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:29.887353 kubelet[2516]: E0813 07:10:29.887339 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.887353 kubelet[2516]: W0813 07:10:29.887349 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.887394 kubelet[2516]: E0813 07:10:29.887358 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.887553 kubelet[2516]: E0813 07:10:29.887540 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.887553 kubelet[2516]: W0813 07:10:29.887550 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.887606 kubelet[2516]: E0813 07:10:29.887558 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.887754 kubelet[2516]: E0813 07:10:29.887741 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.887754 kubelet[2516]: W0813 07:10:29.887751 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.887799 kubelet[2516]: E0813 07:10:29.887758 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.887960 kubelet[2516]: E0813 07:10:29.887946 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.887960 kubelet[2516]: W0813 07:10:29.887957 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.888027 kubelet[2516]: E0813 07:10:29.887965 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.888189 kubelet[2516]: E0813 07:10:29.888174 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.888189 kubelet[2516]: W0813 07:10:29.888185 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.888241 kubelet[2516]: E0813 07:10:29.888193 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:29.888404 kubelet[2516]: E0813 07:10:29.888391 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.888404 kubelet[2516]: W0813 07:10:29.888401 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.888459 kubelet[2516]: E0813 07:10:29.888409 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.984791 kubelet[2516]: E0813 07:10:29.984745 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.984791 kubelet[2516]: W0813 07:10:29.984770 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.984791 kubelet[2516]: E0813 07:10:29.984792 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.985175 kubelet[2516]: E0813 07:10:29.985155 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.985175 kubelet[2516]: W0813 07:10:29.985167 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.985175 kubelet[2516]: E0813 07:10:29.985183 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.985491 kubelet[2516]: E0813 07:10:29.985470 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.985491 kubelet[2516]: W0813 07:10:29.985481 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.985551 kubelet[2516]: E0813 07:10:29.985495 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.985812 kubelet[2516]: E0813 07:10:29.985776 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.985812 kubelet[2516]: W0813 07:10:29.985800 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.985871 kubelet[2516]: E0813 07:10:29.985824 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:29.986071 kubelet[2516]: E0813 07:10:29.986048 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.986071 kubelet[2516]: W0813 07:10:29.986060 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.986143 kubelet[2516]: E0813 07:10:29.986078 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.986741 kubelet[2516]: E0813 07:10:29.986349 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.986741 kubelet[2516]: W0813 07:10:29.986362 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.986741 kubelet[2516]: E0813 07:10:29.986377 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.986741 kubelet[2516]: E0813 07:10:29.986662 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.986741 kubelet[2516]: W0813 07:10:29.986672 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.986945 kubelet[2516]: E0813 07:10:29.986763 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.987162 kubelet[2516]: E0813 07:10:29.987144 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.987162 kubelet[2516]: W0813 07:10:29.987159 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.987249 kubelet[2516]: E0813 07:10:29.987201 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.987408 kubelet[2516]: E0813 07:10:29.987395 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.987408 kubelet[2516]: W0813 07:10:29.987405 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.987460 kubelet[2516]: E0813 07:10:29.987431 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:29.987608 kubelet[2516]: E0813 07:10:29.987596 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.987608 kubelet[2516]: W0813 07:10:29.987606 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.987654 kubelet[2516]: E0813 07:10:29.987621 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.987877 kubelet[2516]: E0813 07:10:29.987845 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.987877 kubelet[2516]: W0813 07:10:29.987856 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.987877 kubelet[2516]: E0813 07:10:29.987869 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.988091 kubelet[2516]: E0813 07:10:29.988076 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.988091 kubelet[2516]: W0813 07:10:29.988087 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.988160 kubelet[2516]: E0813 07:10:29.988101 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.988354 kubelet[2516]: E0813 07:10:29.988333 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.988354 kubelet[2516]: W0813 07:10:29.988349 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.988406 kubelet[2516]: E0813 07:10:29.988366 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.988578 kubelet[2516]: E0813 07:10:29.988564 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.988578 kubelet[2516]: W0813 07:10:29.988575 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.988628 kubelet[2516]: E0813 07:10:29.988588 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:29.988871 kubelet[2516]: E0813 07:10:29.988842 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.988904 kubelet[2516]: W0813 07:10:29.988869 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.988937 kubelet[2516]: E0813 07:10:29.988900 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.989316 kubelet[2516]: E0813 07:10:29.989224 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.989316 kubelet[2516]: W0813 07:10:29.989248 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.989316 kubelet[2516]: E0813 07:10:29.989262 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.989534 kubelet[2516]: E0813 07:10:29.989506 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.989534 kubelet[2516]: W0813 07:10:29.989522 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.989534 kubelet[2516]: E0813 07:10:29.989542 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:10:29.989836 kubelet[2516]: E0813 07:10:29.989821 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:10:29.989836 kubelet[2516]: W0813 07:10:29.989833 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:10:29.989916 kubelet[2516]: E0813 07:10:29.989844 2516 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:10:31.142940 containerd[1471]: time="2025-08-13T07:10:31.142854664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:31.171954 containerd[1471]: time="2025-08-13T07:10:31.171863153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:10:31.206210 containerd[1471]: time="2025-08-13T07:10:31.206143810Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:31.208288 containerd[1471]: time="2025-08-13T07:10:31.208250849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:31.209001 containerd[1471]: time="2025-08-13T07:10:31.208939803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 3.908908609s" Aug 13 07:10:31.209001 containerd[1471]: time="2025-08-13T07:10:31.208999104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:10:31.230700 containerd[1471]: time="2025-08-13T07:10:31.230646462Z" level=info msg="CreateContainer within sandbox \"ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:10:31.245910 containerd[1471]: time="2025-08-13T07:10:31.245850152Z" level=info msg="CreateContainer within sandbox \"ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b\"" Aug 13 07:10:31.246385 containerd[1471]: time="2025-08-13T07:10:31.246355952Z" level=info msg="StartContainer for \"a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b\"" Aug 13 07:10:31.278209 systemd[1]: Started cri-containerd-a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b.scope - libcontainer container a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b. Aug 13 07:10:31.307865 containerd[1471]: time="2025-08-13T07:10:31.307813376Z" level=info msg="StartContainer for \"a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b\" returns successfully" Aug 13 07:10:31.321939 systemd[1]: cri-containerd-a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b.scope: Deactivated successfully. Aug 13 07:10:31.347910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b-rootfs.mount: Deactivated successfully. 
Aug 13 07:10:31.735275 containerd[1471]: time="2025-08-13T07:10:31.735202962Z" level=info msg="shim disconnected" id=a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b namespace=k8s.io Aug 13 07:10:31.735275 containerd[1471]: time="2025-08-13T07:10:31.735269868Z" level=warning msg="cleaning up after shim disconnected" id=a2b13d121a191029910ca226c3b2420f161d41facf0161223d3aaab7c5449c9b namespace=k8s.io Aug 13 07:10:31.735275 containerd[1471]: time="2025-08-13T07:10:31.735278554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:10:31.746303 kubelet[2516]: E0813 07:10:31.745958 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:31.811887 containerd[1471]: time="2025-08-13T07:10:31.811837867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:10:33.739309 kubelet[2516]: E0813 07:10:33.739250 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:35.287199 containerd[1471]: time="2025-08-13T07:10:35.287136099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:35.406725 containerd[1471]: time="2025-08-13T07:10:35.406643324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:10:35.495375 containerd[1471]: time="2025-08-13T07:10:35.495311119Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:35.610442 containerd[1471]: time="2025-08-13T07:10:35.610241107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:35.611341 containerd[1471]: time="2025-08-13T07:10:35.611256794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.799374854s" Aug 13 07:10:35.611341 containerd[1471]: time="2025-08-13T07:10:35.611306097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:10:35.613724 containerd[1471]: time="2025-08-13T07:10:35.613630753Z" level=info msg="CreateContainer within sandbox \"ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:10:35.739378 kubelet[2516]: E0813 07:10:35.739320 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:35.978532 containerd[1471]: time="2025-08-13T07:10:35.978397153Z" level=info msg="CreateContainer within sandbox \"ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6\"" Aug 13 07:10:35.979141 containerd[1471]: time="2025-08-13T07:10:35.979110744Z" level=info msg="StartContainer for \"0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6\"" Aug 13 07:10:36.016152 systemd[1]: Started cri-containerd-0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6.scope - libcontainer container 0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6. Aug 13 07:10:36.914801 containerd[1471]: time="2025-08-13T07:10:36.914738633Z" level=info msg="StartContainer for \"0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6\" returns successfully" Aug 13 07:10:37.739045 kubelet[2516]: E0813 07:10:37.738962 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:38.265161 systemd[1]: cri-containerd-0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6.scope: Deactivated successfully. Aug 13 07:10:38.274895 kubelet[2516]: I0813 07:10:38.273787 2516 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:10:38.288666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6-rootfs.mount: Deactivated successfully. Aug 13 07:10:38.297707 containerd[1471]: time="2025-08-13T07:10:38.297337752Z" level=info msg="shim disconnected" id=0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6 namespace=k8s.io Aug 13 07:10:38.297707 containerd[1471]: time="2025-08-13T07:10:38.297540373Z" level=warning msg="cleaning up after shim disconnected" id=0718ee0b1cf59c0f7c5e031d43c33b02f4e261b4f571f2fc96f1e01560894be6 namespace=k8s.io Aug 13 07:10:38.297707 containerd[1471]: time="2025-08-13T07:10:38.297556403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:10:38.314948 systemd[1]: Created slice kubepods-burstable-pod1735f9ec_98da_41a0_9a73_36a5bf3e9daf.slice - libcontainer container kubepods-burstable-pod1735f9ec_98da_41a0_9a73_36a5bf3e9daf.slice. Aug 13 07:10:38.323762 systemd[1]: Created slice kubepods-besteffort-podf33e7410_3fa9_4e29_b769_12773c11a6bf.slice - libcontainer container kubepods-besteffort-podf33e7410_3fa9_4e29_b769_12773c11a6bf.slice. Aug 13 07:10:38.329216 systemd[1]: Created slice kubepods-besteffort-podc48322e9_ac5f_466c_9de8_c661562c7150.slice - libcontainer container kubepods-besteffort-podc48322e9_ac5f_466c_9de8_c661562c7150.slice. Aug 13 07:10:38.335282 systemd[1]: Created slice kubepods-burstable-pod89d9471b_7057_4239_b0e9_0c59341f2450.slice - libcontainer container kubepods-burstable-pod89d9471b_7057_4239_b0e9_0c59341f2450.slice. 
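
With the CNI binaries and configuration installed by install-cni, the kubelet marks the container runtime network ready and immediately reports the node Ready ("Fast updating node status as it just became ready"), which releases the pods that had been blocked on "cni plugin not initialized". The "Created slice" entries in this stretch are the systemd cgroup driver materializing one slice per scheduled pod, named from the pod's QoS class and UID with dashes mapped to underscores. A small sketch of the naming rule, reconstructed from these entries (Guaranteed-class pods, none of which appear here, omit the QoS segment):

    # Illustrative reconstruction of the kubepods slice naming in this log.
    def pod_slice(qos: str, uid: str) -> str:
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "1735f9ec-98da-41a0-9a73-36a5bf3e9daf"))
    # kubepods-burstable-pod1735f9ec_98da_41a0_9a73_36a5bf3e9daf.slice
    print(pod_slice("besteffort", "8e4cc1e4-02e4-465a-8bb4-0feae995f17b"))
    # kubepods-besteffort-pod8e4cc1e4_02e4_465a_8bb4_0feae995f17b.slice

Note that a present CNI config does not mean Calico itself is ready: the sandbox creations that follow still fail on /var/lib/calico/nodename until the calico-node container is up and has mounted /var/lib/calico/.
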
Aug 13 07:10:38.341368 systemd[1]: Created slice kubepods-besteffort-pod8e4cc1e4_02e4_465a_8bb4_0feae995f17b.slice - libcontainer container kubepods-besteffort-pod8e4cc1e4_02e4_465a_8bb4_0feae995f17b.slice. Aug 13 07:10:38.345704 systemd[1]: Created slice kubepods-besteffort-pod4bdd3ae4_af36_4635_bf89_707fda641eeb.slice - libcontainer container kubepods-besteffort-pod4bdd3ae4_af36_4635_bf89_707fda641eeb.slice. Aug 13 07:10:38.350696 systemd[1]: Created slice kubepods-besteffort-pod0e76a2a9_0e2c_4ebb_b5c0_9180a06395b3.slice - libcontainer container kubepods-besteffort-pod0e76a2a9_0e2c_4ebb_b5c0_9180a06395b3.slice. Aug 13 07:10:38.438784 kubelet[2516]: I0813 07:10:38.438692 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-ca-bundle\") pod \"whisker-6dd6485567-mjgd8\" (UID: \"c48322e9-ac5f-466c-9de8-c661562c7150\") " pod="calico-system/whisker-6dd6485567-mjgd8" Aug 13 07:10:38.438784 kubelet[2516]: I0813 07:10:38.438741 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e4cc1e4-02e4-465a-8bb4-0feae995f17b-calico-apiserver-certs\") pod \"calico-apiserver-7cd8575958-jsmst\" (UID: \"8e4cc1e4-02e4-465a-8bb4-0feae995f17b\") " pod="calico-apiserver/calico-apiserver-7cd8575958-jsmst" Aug 13 07:10:38.438784 kubelet[2516]: I0813 07:10:38.438760 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3-tigera-ca-bundle\") pod \"calico-kube-controllers-54495f4bd9-bw4q4\" (UID: \"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3\") " pod="calico-system/calico-kube-controllers-54495f4bd9-bw4q4" Aug 13 07:10:38.438784 kubelet[2516]: I0813 07:10:38.438780 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxdr2\" (UniqueName: \"kubernetes.io/projected/4bdd3ae4-af36-4635-bf89-707fda641eeb-kube-api-access-nxdr2\") pod \"calico-apiserver-7cd8575958-vfc6n\" (UID: \"4bdd3ae4-af36-4635-bf89-707fda641eeb\") " pod="calico-apiserver/calico-apiserver-7cd8575958-vfc6n" Aug 13 07:10:38.438784 kubelet[2516]: I0813 07:10:38.438796 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c4kw\" (UniqueName: \"kubernetes.io/projected/c48322e9-ac5f-466c-9de8-c661562c7150-kube-api-access-9c4kw\") pod \"whisker-6dd6485567-mjgd8\" (UID: \"c48322e9-ac5f-466c-9de8-c661562c7150\") " pod="calico-system/whisker-6dd6485567-mjgd8" Aug 13 07:10:38.439169 kubelet[2516]: I0813 07:10:38.438813 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhhhl\" (UniqueName: \"kubernetes.io/projected/8e4cc1e4-02e4-465a-8bb4-0feae995f17b-kube-api-access-dhhhl\") pod \"calico-apiserver-7cd8575958-jsmst\" (UID: \"8e4cc1e4-02e4-465a-8bb4-0feae995f17b\") " pod="calico-apiserver/calico-apiserver-7cd8575958-jsmst" Aug 13 07:10:38.439169 kubelet[2516]: I0813 07:10:38.438829 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmdmv\" (UniqueName: \"kubernetes.io/projected/0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3-kube-api-access-qmdmv\") pod \"calico-kube-controllers-54495f4bd9-bw4q4\" (UID: 
\"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3\") " pod="calico-system/calico-kube-controllers-54495f4bd9-bw4q4" Aug 13 07:10:38.439169 kubelet[2516]: I0813 07:10:38.438888 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-backend-key-pair\") pod \"whisker-6dd6485567-mjgd8\" (UID: \"c48322e9-ac5f-466c-9de8-c661562c7150\") " pod="calico-system/whisker-6dd6485567-mjgd8" Aug 13 07:10:38.439169 kubelet[2516]: I0813 07:10:38.438930 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzgv\" (UniqueName: \"kubernetes.io/projected/f33e7410-3fa9-4e29-b769-12773c11a6bf-kube-api-access-wxzgv\") pod \"goldmane-58fd7646b9-n4sgd\" (UID: \"f33e7410-3fa9-4e29-b769-12773c11a6bf\") " pod="calico-system/goldmane-58fd7646b9-n4sgd" Aug 13 07:10:38.439169 kubelet[2516]: I0813 07:10:38.438962 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bdd3ae4-af36-4635-bf89-707fda641eeb-calico-apiserver-certs\") pod \"calico-apiserver-7cd8575958-vfc6n\" (UID: \"4bdd3ae4-af36-4635-bf89-707fda641eeb\") " pod="calico-apiserver/calico-apiserver-7cd8575958-vfc6n" Aug 13 07:10:38.439300 kubelet[2516]: I0813 07:10:38.439027 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfr5w\" (UniqueName: \"kubernetes.io/projected/89d9471b-7057-4239-b0e9-0c59341f2450-kube-api-access-rfr5w\") pod \"coredns-7c65d6cfc9-g7nzk\" (UID: \"89d9471b-7057-4239-b0e9-0c59341f2450\") " pod="kube-system/coredns-7c65d6cfc9-g7nzk" Aug 13 07:10:38.439300 kubelet[2516]: I0813 07:10:38.439051 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1735f9ec-98da-41a0-9a73-36a5bf3e9daf-config-volume\") pod \"coredns-7c65d6cfc9-gbvdn\" (UID: \"1735f9ec-98da-41a0-9a73-36a5bf3e9daf\") " pod="kube-system/coredns-7c65d6cfc9-gbvdn" Aug 13 07:10:38.439300 kubelet[2516]: I0813 07:10:38.439069 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f33e7410-3fa9-4e29-b769-12773c11a6bf-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-n4sgd\" (UID: \"f33e7410-3fa9-4e29-b769-12773c11a6bf\") " pod="calico-system/goldmane-58fd7646b9-n4sgd" Aug 13 07:10:38.439300 kubelet[2516]: I0813 07:10:38.439089 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89d9471b-7057-4239-b0e9-0c59341f2450-config-volume\") pod \"coredns-7c65d6cfc9-g7nzk\" (UID: \"89d9471b-7057-4239-b0e9-0c59341f2450\") " pod="kube-system/coredns-7c65d6cfc9-g7nzk" Aug 13 07:10:38.439300 kubelet[2516]: I0813 07:10:38.439105 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcfcv\" (UniqueName: \"kubernetes.io/projected/1735f9ec-98da-41a0-9a73-36a5bf3e9daf-kube-api-access-gcfcv\") pod \"coredns-7c65d6cfc9-gbvdn\" (UID: \"1735f9ec-98da-41a0-9a73-36a5bf3e9daf\") " pod="kube-system/coredns-7c65d6cfc9-gbvdn" Aug 13 07:10:38.439479 kubelet[2516]: I0813 07:10:38.439127 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e7410-3fa9-4e29-b769-12773c11a6bf-config\") pod \"goldmane-58fd7646b9-n4sgd\" (UID: \"f33e7410-3fa9-4e29-b769-12773c11a6bf\") " pod="calico-system/goldmane-58fd7646b9-n4sgd" Aug 13 07:10:38.439479 kubelet[2516]: I0813 07:10:38.439155 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f33e7410-3fa9-4e29-b769-12773c11a6bf-goldmane-key-pair\") pod \"goldmane-58fd7646b9-n4sgd\" (UID: \"f33e7410-3fa9-4e29-b769-12773c11a6bf\") " pod="calico-system/goldmane-58fd7646b9-n4sgd" Aug 13 07:10:38.618627 kubelet[2516]: E0813 07:10:38.618574 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:38.619337 containerd[1471]: time="2025-08-13T07:10:38.619221902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gbvdn,Uid:1735f9ec-98da-41a0-9a73-36a5bf3e9daf,Namespace:kube-system,Attempt:0,}" Aug 13 07:10:38.628535 containerd[1471]: time="2025-08-13T07:10:38.628491506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n4sgd,Uid:f33e7410-3fa9-4e29-b769-12773c11a6bf,Namespace:calico-system,Attempt:0,}" Aug 13 07:10:38.633098 containerd[1471]: time="2025-08-13T07:10:38.633060734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd6485567-mjgd8,Uid:c48322e9-ac5f-466c-9de8-c661562c7150,Namespace:calico-system,Attempt:0,}" Aug 13 07:10:38.638670 kubelet[2516]: E0813 07:10:38.638614 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:38.639154 containerd[1471]: time="2025-08-13T07:10:38.639086437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7nzk,Uid:89d9471b-7057-4239-b0e9-0c59341f2450,Namespace:kube-system,Attempt:0,}" Aug 13 07:10:38.644732 containerd[1471]: time="2025-08-13T07:10:38.644673898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-jsmst,Uid:8e4cc1e4-02e4-465a-8bb4-0feae995f17b,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:10:38.648678 containerd[1471]: time="2025-08-13T07:10:38.648648130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-vfc6n,Uid:4bdd3ae4-af36-4635-bf89-707fda641eeb,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:10:38.654135 containerd[1471]: time="2025-08-13T07:10:38.654104323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54495f4bd9-bw4q4,Uid:0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3,Namespace:calico-system,Attempt:0,}" Aug 13 07:10:38.761463 containerd[1471]: time="2025-08-13T07:10:38.761280033Z" level=error msg="Failed to destroy network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.766649 containerd[1471]: time="2025-08-13T07:10:38.766587197Z" level=error msg="Failed to destroy network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.768489 containerd[1471]: time="2025-08-13T07:10:38.768452991Z" level=error msg="encountered an error cleaning up failed sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.768558 containerd[1471]: time="2025-08-13T07:10:38.768527220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n4sgd,Uid:f33e7410-3fa9-4e29-b769-12773c11a6bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.774078 containerd[1471]: time="2025-08-13T07:10:38.774029099Z" level=error msg="encountered an error cleaning up failed sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.774140 containerd[1471]: time="2025-08-13T07:10:38.774106274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gbvdn,Uid:1735f9ec-98da-41a0-9a73-36a5bf3e9daf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.785056 kubelet[2516]: E0813 07:10:38.784609 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.785056 kubelet[2516]: E0813 07:10:38.784692 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gbvdn" Aug 13 07:10:38.785056 kubelet[2516]: E0813 07:10:38.784716 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gbvdn" Aug 13 07:10:38.785565 kubelet[2516]: E0813 07:10:38.784757 2516 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gbvdn_kube-system(1735f9ec-98da-41a0-9a73-36a5bf3e9daf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gbvdn_kube-system(1735f9ec-98da-41a0-9a73-36a5bf3e9daf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gbvdn" podUID="1735f9ec-98da-41a0-9a73-36a5bf3e9daf" Aug 13 07:10:38.785565 kubelet[2516]: E0813 07:10:38.784609 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.785565 kubelet[2516]: E0813 07:10:38.784844 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-n4sgd" Aug 13 07:10:38.785683 kubelet[2516]: E0813 07:10:38.784869 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-n4sgd" Aug 13 07:10:38.785683 kubelet[2516]: E0813 07:10:38.784905 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-n4sgd_calico-system(f33e7410-3fa9-4e29-b769-12773c11a6bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-n4sgd_calico-system(f33e7410-3fa9-4e29-b769-12773c11a6bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-n4sgd" podUID="f33e7410-3fa9-4e29-b769-12773c11a6bf" Aug 13 07:10:38.796791 containerd[1471]: time="2025-08-13T07:10:38.796707568Z" level=error msg="Failed to destroy network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.797260 containerd[1471]: time="2025-08-13T07:10:38.797204150Z" level=error msg="encountered an error cleaning up failed sandbox 
\"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.797325 containerd[1471]: time="2025-08-13T07:10:38.797280473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd6485567-mjgd8,Uid:c48322e9-ac5f-466c-9de8-c661562c7150,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.797719 kubelet[2516]: E0813 07:10:38.797645 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.797719 kubelet[2516]: E0813 07:10:38.797721 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd6485567-mjgd8" Aug 13 07:10:38.797906 kubelet[2516]: E0813 07:10:38.797742 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd6485567-mjgd8" Aug 13 07:10:38.797906 kubelet[2516]: E0813 07:10:38.797791 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd6485567-mjgd8_calico-system(c48322e9-ac5f-466c-9de8-c661562c7150)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd6485567-mjgd8_calico-system(c48322e9-ac5f-466c-9de8-c661562c7150)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd6485567-mjgd8" podUID="c48322e9-ac5f-466c-9de8-c661562c7150" Aug 13 07:10:38.800847 containerd[1471]: time="2025-08-13T07:10:38.800772990Z" level=error msg="Failed to destroy network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.801307 containerd[1471]: time="2025-08-13T07:10:38.801268841Z" 
level=error msg="encountered an error cleaning up failed sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.801348 containerd[1471]: time="2025-08-13T07:10:38.801330557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-vfc6n,Uid:4bdd3ae4-af36-4635-bf89-707fda641eeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.801755 kubelet[2516]: E0813 07:10:38.801563 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.801755 kubelet[2516]: E0813 07:10:38.801635 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd8575958-vfc6n" Aug 13 07:10:38.801755 kubelet[2516]: E0813 07:10:38.801661 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd8575958-vfc6n" Aug 13 07:10:38.802463 kubelet[2516]: E0813 07:10:38.801712 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd8575958-vfc6n_calico-apiserver(4bdd3ae4-af36-4635-bf89-707fda641eeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd8575958-vfc6n_calico-apiserver(4bdd3ae4-af36-4635-bf89-707fda641eeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd8575958-vfc6n" podUID="4bdd3ae4-af36-4635-bf89-707fda641eeb" Aug 13 07:10:38.804123 containerd[1471]: time="2025-08-13T07:10:38.804082554Z" level=error msg="Failed to destroy network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.804629 containerd[1471]: time="2025-08-13T07:10:38.804601127Z" level=error msg="encountered an error cleaning up failed sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.804695 containerd[1471]: time="2025-08-13T07:10:38.804650850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7nzk,Uid:89d9471b-7057-4239-b0e9-0c59341f2450,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.806127 kubelet[2516]: E0813 07:10:38.806089 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.806188 kubelet[2516]: E0813 07:10:38.806146 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g7nzk" Aug 13 07:10:38.806188 kubelet[2516]: E0813 07:10:38.806169 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g7nzk" Aug 13 07:10:38.806301 kubelet[2516]: E0813 07:10:38.806247 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-g7nzk_kube-system(89d9471b-7057-4239-b0e9-0c59341f2450)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-g7nzk_kube-system(89d9471b-7057-4239-b0e9-0c59341f2450)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g7nzk" podUID="89d9471b-7057-4239-b0e9-0c59341f2450" Aug 13 07:10:38.806725 containerd[1471]: time="2025-08-13T07:10:38.806692113Z" level=error msg="Failed to destroy network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.807067 containerd[1471]: time="2025-08-13T07:10:38.807041669Z" level=error msg="encountered an error cleaning up failed sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.807134 containerd[1471]: time="2025-08-13T07:10:38.807085992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-jsmst,Uid:8e4cc1e4-02e4-465a-8bb4-0feae995f17b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.807234 kubelet[2516]: E0813 07:10:38.807213 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.807316 kubelet[2516]: E0813 07:10:38.807269 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd8575958-jsmst" Aug 13 07:10:38.807316 kubelet[2516]: E0813 07:10:38.807286 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd8575958-jsmst" Aug 13 07:10:38.807386 kubelet[2516]: E0813 07:10:38.807309 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd8575958-jsmst_calico-apiserver(8e4cc1e4-02e4-465a-8bb4-0feae995f17b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd8575958-jsmst_calico-apiserver(8e4cc1e4-02e4-465a-8bb4-0feae995f17b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd8575958-jsmst" podUID="8e4cc1e4-02e4-465a-8bb4-0feae995f17b" Aug 13 07:10:38.810679 containerd[1471]: time="2025-08-13T07:10:38.810633433Z" level=error msg="Failed to destroy network for sandbox 
\"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.811107 containerd[1471]: time="2025-08-13T07:10:38.811076175Z" level=error msg="encountered an error cleaning up failed sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.811148 containerd[1471]: time="2025-08-13T07:10:38.811129525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54495f4bd9-bw4q4,Uid:0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.811326 kubelet[2516]: E0813 07:10:38.811295 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:38.811419 kubelet[2516]: E0813 07:10:38.811333 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54495f4bd9-bw4q4" Aug 13 07:10:38.811419 kubelet[2516]: E0813 07:10:38.811351 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54495f4bd9-bw4q4" Aug 13 07:10:38.811419 kubelet[2516]: E0813 07:10:38.811383 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54495f4bd9-bw4q4_calico-system(0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54495f4bd9-bw4q4_calico-system(0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54495f4bd9-bw4q4" 
podUID="0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3" Aug 13 07:10:38.922409 kubelet[2516]: I0813 07:10:38.922266 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:10:38.925408 kubelet[2516]: I0813 07:10:38.925232 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:10:38.929779 containerd[1471]: time="2025-08-13T07:10:38.928928860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:10:38.929838 kubelet[2516]: I0813 07:10:38.929316 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:10:38.933934 kubelet[2516]: I0813 07:10:38.933900 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:10:38.939037 containerd[1471]: time="2025-08-13T07:10:38.938738637Z" level=info msg="StopPodSandbox for \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\"" Aug 13 07:10:38.939742 containerd[1471]: time="2025-08-13T07:10:38.938767241Z" level=info msg="StopPodSandbox for \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\"" Aug 13 07:10:38.941470 containerd[1471]: time="2025-08-13T07:10:38.941432495Z" level=info msg="StopPodSandbox for \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\"" Aug 13 07:10:38.944435 containerd[1471]: time="2025-08-13T07:10:38.944390328Z" level=info msg="Ensure that sandbox 2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e in task-service has been cleanup successfully" Aug 13 07:10:38.944869 containerd[1471]: time="2025-08-13T07:10:38.944840954Z" level=info msg="Ensure that sandbox b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc in task-service has been cleanup successfully" Aug 13 07:10:38.945345 containerd[1471]: time="2025-08-13T07:10:38.945323240Z" level=info msg="Ensure that sandbox 4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db in task-service has been cleanup successfully" Aug 13 07:10:38.949721 kubelet[2516]: I0813 07:10:38.949695 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:10:38.951057 containerd[1471]: time="2025-08-13T07:10:38.950618091Z" level=info msg="StopPodSandbox for \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\"" Aug 13 07:10:38.951057 containerd[1471]: time="2025-08-13T07:10:38.950790554Z" level=info msg="Ensure that sandbox ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb in task-service has been cleanup successfully" Aug 13 07:10:38.952566 containerd[1471]: time="2025-08-13T07:10:38.952519991Z" level=info msg="StopPodSandbox for \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\"" Aug 13 07:10:38.954083 kubelet[2516]: I0813 07:10:38.954059 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:10:38.954764 containerd[1471]: time="2025-08-13T07:10:38.954736784Z" level=info msg="StopPodSandbox for \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\"" Aug 13 07:10:38.955026 containerd[1471]: 
time="2025-08-13T07:10:38.955004566Z" level=info msg="Ensure that sandbox a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf in task-service has been cleanup successfully" Aug 13 07:10:38.959908 kubelet[2516]: I0813 07:10:38.959869 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:10:38.962565 containerd[1471]: time="2025-08-13T07:10:38.962176963Z" level=info msg="StopPodSandbox for \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\"" Aug 13 07:10:38.962565 containerd[1471]: time="2025-08-13T07:10:38.962364975Z" level=info msg="Ensure that sandbox ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad in task-service has been cleanup successfully" Aug 13 07:10:38.964546 containerd[1471]: time="2025-08-13T07:10:38.964471049Z" level=info msg="Ensure that sandbox 5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f in task-service has been cleanup successfully" Aug 13 07:10:39.022323 containerd[1471]: time="2025-08-13T07:10:39.022254774Z" level=error msg="StopPodSandbox for \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\" failed" error="failed to destroy network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.023896 kubelet[2516]: E0813 07:10:39.022529 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:10:39.023896 kubelet[2516]: E0813 07:10:39.022607 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f"} Aug 13 07:10:39.023896 kubelet[2516]: E0813 07:10:39.022667 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1735f9ec-98da-41a0-9a73-36a5bf3e9daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.023896 kubelet[2516]: E0813 07:10:39.022691 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1735f9ec-98da-41a0-9a73-36a5bf3e9daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gbvdn" podUID="1735f9ec-98da-41a0-9a73-36a5bf3e9daf" Aug 13 07:10:39.024541 containerd[1471]: 
time="2025-08-13T07:10:39.024474580Z" level=error msg="StopPodSandbox for \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\" failed" error="failed to destroy network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.024846 kubelet[2516]: E0813 07:10:39.024734 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:10:39.024846 kubelet[2516]: E0813 07:10:39.024765 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e"} Aug 13 07:10:39.024846 kubelet[2516]: E0813 07:10:39.024788 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e4cc1e4-02e4-465a-8bb4-0feae995f17b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.024846 kubelet[2516]: E0813 07:10:39.024806 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e4cc1e4-02e4-465a-8bb4-0feae995f17b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd8575958-jsmst" podUID="8e4cc1e4-02e4-465a-8bb4-0feae995f17b" Aug 13 07:10:39.028648 containerd[1471]: time="2025-08-13T07:10:39.028593663Z" level=error msg="StopPodSandbox for \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\" failed" error="failed to destroy network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.028921 kubelet[2516]: E0813 07:10:39.028870 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:10:39.029029 kubelet[2516]: E0813 07:10:39.028936 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc"} Aug 13 07:10:39.029029 kubelet[2516]: E0813 07:10:39.028994 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.029029 kubelet[2516]: E0813 07:10:39.029021 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54495f4bd9-bw4q4" podUID="0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3" Aug 13 07:10:39.035608 containerd[1471]: time="2025-08-13T07:10:39.035557216Z" level=error msg="StopPodSandbox for \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\" failed" error="failed to destroy network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.035827 kubelet[2516]: E0813 07:10:39.035787 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:10:39.035875 kubelet[2516]: E0813 07:10:39.035834 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db"} Aug 13 07:10:39.035910 kubelet[2516]: E0813 07:10:39.035884 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f33e7410-3fa9-4e29-b769-12773c11a6bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.035991 kubelet[2516]: E0813 07:10:39.035908 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f33e7410-3fa9-4e29-b769-12773c11a6bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-n4sgd" podUID="f33e7410-3fa9-4e29-b769-12773c11a6bf" Aug 13 07:10:39.036683 containerd[1471]: time="2025-08-13T07:10:39.036624640Z" level=error msg="StopPodSandbox for \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\" failed" error="failed to destroy network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.037147 kubelet[2516]: E0813 07:10:39.037102 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:10:39.037386 kubelet[2516]: E0813 07:10:39.037362 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb"} Aug 13 07:10:39.037504 kubelet[2516]: E0813 07:10:39.037445 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bdd3ae4-af36-4635-bf89-707fda641eeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.037504 kubelet[2516]: E0813 07:10:39.037476 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bdd3ae4-af36-4635-bf89-707fda641eeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd8575958-vfc6n" podUID="4bdd3ae4-af36-4635-bf89-707fda641eeb" Aug 13 07:10:39.038376 containerd[1471]: time="2025-08-13T07:10:39.038337405Z" level=error msg="StopPodSandbox for \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\" failed" error="failed to destroy network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.038519 kubelet[2516]: E0813 07:10:39.038495 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:10:39.038560 kubelet[2516]: E0813 07:10:39.038523 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad"} Aug 13 07:10:39.038560 kubelet[2516]: E0813 07:10:39.038543 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c48322e9-ac5f-466c-9de8-c661562c7150\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.038625 kubelet[2516]: E0813 07:10:39.038560 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c48322e9-ac5f-466c-9de8-c661562c7150\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd6485567-mjgd8" podUID="c48322e9-ac5f-466c-9de8-c661562c7150" Aug 13 07:10:39.040097 containerd[1471]: time="2025-08-13T07:10:39.040060711Z" level=error msg="StopPodSandbox for \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\" failed" error="failed to destroy network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.040233 kubelet[2516]: E0813 07:10:39.040196 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:10:39.040284 kubelet[2516]: E0813 07:10:39.040232 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf"} Aug 13 07:10:39.040284 kubelet[2516]: E0813 07:10:39.040260 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89d9471b-7057-4239-b0e9-0c59341f2450\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.040350 kubelet[2516]: E0813 07:10:39.040279 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"89d9471b-7057-4239-b0e9-0c59341f2450\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g7nzk" podUID="89d9471b-7057-4239-b0e9-0c59341f2450" Aug 13 07:10:39.746491 systemd[1]: Created slice kubepods-besteffort-pod56153e13_236a_410f_9ff5_c48bb048b643.slice - libcontainer container kubepods-besteffort-pod56153e13_236a_410f_9ff5_c48bb048b643.slice. Aug 13 07:10:39.749360 containerd[1471]: time="2025-08-13T07:10:39.749313080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6twvn,Uid:56153e13-236a-410f-9ff5-c48bb048b643,Namespace:calico-system,Attempt:0,}" Aug 13 07:10:39.820113 containerd[1471]: time="2025-08-13T07:10:39.820027933Z" level=error msg="Failed to destroy network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.821168 containerd[1471]: time="2025-08-13T07:10:39.821116436Z" level=error msg="encountered an error cleaning up failed sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.821225 containerd[1471]: time="2025-08-13T07:10:39.821206275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6twvn,Uid:56153e13-236a-410f-9ff5-c48bb048b643,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.821535 kubelet[2516]: E0813 07:10:39.821489 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.822051 kubelet[2516]: E0813 07:10:39.821559 2516 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6twvn" Aug 13 07:10:39.822051 kubelet[2516]: E0813 07:10:39.821586 2516 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6twvn" Aug 13 07:10:39.822051 kubelet[2516]: E0813 07:10:39.821644 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6twvn_calico-system(56153e13-236a-410f-9ff5-c48bb048b643)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6twvn_calico-system(56153e13-236a-410f-9ff5-c48bb048b643)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:39.822597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4-shm.mount: Deactivated successfully. Aug 13 07:10:39.966743 kubelet[2516]: I0813 07:10:39.966683 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:10:39.967481 containerd[1471]: time="2025-08-13T07:10:39.967423819Z" level=info msg="StopPodSandbox for \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\"" Aug 13 07:10:39.967659 containerd[1471]: time="2025-08-13T07:10:39.967638572Z" level=info msg="Ensure that sandbox 9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4 in task-service has been cleanup successfully" Aug 13 07:10:39.997472 containerd[1471]: time="2025-08-13T07:10:39.997263968Z" level=error msg="StopPodSandbox for \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\" failed" error="failed to destroy network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:10:39.997703 kubelet[2516]: E0813 07:10:39.997623 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:10:39.997788 kubelet[2516]: E0813 07:10:39.997716 2516 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4"} Aug 13 07:10:39.997788 kubelet[2516]: E0813 07:10:39.997776 2516 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56153e13-236a-410f-9ff5-c48bb048b643\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Aug 13 07:10:39.997911 kubelet[2516]: E0813 07:10:39.997815 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56153e13-236a-410f-9ff5-c48bb048b643\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6twvn" podUID="56153e13-236a-410f-9ff5-c48bb048b643" Aug 13 07:10:43.094956 systemd[1]: Started sshd@7-10.0.0.75:22-10.0.0.1:35936.service - OpenSSH per-connection server daemon (10.0.0.1:35936). Aug 13 07:10:43.257345 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 35936 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:10:43.259265 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:43.264797 systemd-logind[1459]: New session 8 of user core. Aug 13 07:10:43.272204 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:10:43.404207 sshd[3815]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:43.408878 systemd[1]: sshd@7-10.0.0.75:22-10.0.0.1:35936.service: Deactivated successfully. Aug 13 07:10:43.411038 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:10:43.411743 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:10:43.412821 systemd-logind[1459]: Removed session 8. Aug 13 07:10:46.254130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230098355.mount: Deactivated successfully. 
Aug 13 07:10:46.855892 containerd[1471]: time="2025-08-13T07:10:46.855819779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:10:46.857258 containerd[1471]: time="2025-08-13T07:10:46.857209286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163"
Aug 13 07:10:46.858420 containerd[1471]: time="2025-08-13T07:10:46.858379582Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:10:46.860880 containerd[1471]: time="2025-08-13T07:10:46.860807507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:10:46.861537 containerd[1471]: time="2025-08-13T07:10:46.861490398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.932516745s"
Aug 13 07:10:46.861537 containerd[1471]: time="2025-08-13T07:10:46.861533619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\""
Aug 13 07:10:46.889353 containerd[1471]: time="2025-08-13T07:10:46.889273539Z" level=info msg="CreateContainer within sandbox \"ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug 13 07:10:46.908627 containerd[1471]: time="2025-08-13T07:10:46.908551290Z" level=info msg="CreateContainer within sandbox \"ab086026084b71ab5aca8987c856e53bfa808c6a4a8ddb647ed6fa6c4ad92393\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f03cbb9a14f0698e55be47e4264b68ff40820edeb8a41c3be3be9ea25d403861\""
Aug 13 07:10:46.909606 containerd[1471]: time="2025-08-13T07:10:46.909463523Z" level=info msg="StartContainer for \"f03cbb9a14f0698e55be47e4264b68ff40820edeb8a41c3be3be9ea25d403861\""
Aug 13 07:10:46.967177 systemd[1]: Started cri-containerd-f03cbb9a14f0698e55be47e4264b68ff40820edeb8a41c3be3be9ea25d403861.scope - libcontainer container f03cbb9a14f0698e55be47e4264b68ff40820edeb8a41c3be3be9ea25d403861.
Aug 13 07:10:47.194548 containerd[1471]: time="2025-08-13T07:10:47.194418492Z" level=info msg="StartContainer for \"f03cbb9a14f0698e55be47e4264b68ff40820edeb8a41c3be3be9ea25d403861\" returns successfully"
Aug 13 07:10:47.207456 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug 13 07:10:47.207636 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Aug 13 07:10:47.343878 containerd[1471]: time="2025-08-13T07:10:47.343814583Z" level=info msg="StopPodSandbox for \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\""
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.433 [INFO][3903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.433 [INFO][3903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" iface="eth0" netns="/var/run/netns/cni-a0c8c3a2-0c83-4a05-6695-eb5cbfa83317"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.434 [INFO][3903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" iface="eth0" netns="/var/run/netns/cni-a0c8c3a2-0c83-4a05-6695-eb5cbfa83317"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.434 [INFO][3903] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" iface="eth0" netns="/var/run/netns/cni-a0c8c3a2-0c83-4a05-6695-eb5cbfa83317"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.434 [INFO][3903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.434 [INFO][3903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.496 [INFO][3912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.497 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.497 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.504 [WARNING][3912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.504 [INFO][3912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0"
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.506 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:10:47.512656 containerd[1471]: 2025-08-13 07:10:47.509 [INFO][3903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad"
Aug 13 07:10:47.513226 containerd[1471]: time="2025-08-13T07:10:47.512835135Z" level=info msg="TearDown network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\" successfully"
Aug 13 07:10:47.513226 containerd[1471]: time="2025-08-13T07:10:47.512880580Z" level=info msg="StopPodSandbox for \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\" returns successfully"
Aug 13 07:10:47.516213 systemd[1]: run-netns-cni\x2da0c8c3a2\x2d0c83\x2d4a05\x2d6695\x2deb5cbfa83317.mount: Deactivated successfully.
Aug 13 07:10:47.696906 kubelet[2516]: I0813 07:10:47.696791 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-ca-bundle\") pod \"c48322e9-ac5f-466c-9de8-c661562c7150\" (UID: \"c48322e9-ac5f-466c-9de8-c661562c7150\") "
Aug 13 07:10:47.696906 kubelet[2516]: I0813 07:10:47.696868 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c4kw\" (UniqueName: \"kubernetes.io/projected/c48322e9-ac5f-466c-9de8-c661562c7150-kube-api-access-9c4kw\") pod \"c48322e9-ac5f-466c-9de8-c661562c7150\" (UID: \"c48322e9-ac5f-466c-9de8-c661562c7150\") "
Aug 13 07:10:47.696906 kubelet[2516]: I0813 07:10:47.696893 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-backend-key-pair\") pod \"c48322e9-ac5f-466c-9de8-c661562c7150\" (UID: \"c48322e9-ac5f-466c-9de8-c661562c7150\") "
Aug 13 07:10:47.697846 kubelet[2516]: I0813 07:10:47.697650 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c48322e9-ac5f-466c-9de8-c661562c7150" (UID: "c48322e9-ac5f-466c-9de8-c661562c7150"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 07:10:47.703046 kubelet[2516]: I0813 07:10:47.702918 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c48322e9-ac5f-466c-9de8-c661562c7150" (UID: "c48322e9-ac5f-466c-9de8-c661562c7150"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 07:10:47.703605 kubelet[2516]: I0813 07:10:47.703543 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48322e9-ac5f-466c-9de8-c661562c7150-kube-api-access-9c4kw" (OuterVolumeSpecName: "kube-api-access-9c4kw") pod "c48322e9-ac5f-466c-9de8-c661562c7150" (UID: "c48322e9-ac5f-466c-9de8-c661562c7150"). InnerVolumeSpecName "kube-api-access-9c4kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 07:10:47.705278 systemd[1]: var-lib-kubelet-pods-c48322e9\x2dac5f\x2d466c\x2d9de8\x2dc661562c7150-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9c4kw.mount: Deactivated successfully.
Aug 13 07:10:47.705404 systemd[1]: var-lib-kubelet-pods-c48322e9\x2dac5f\x2d466c\x2d9de8\x2dc661562c7150-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Aug 13 07:10:47.747202 systemd[1]: Removed slice kubepods-besteffort-podc48322e9_ac5f_466c_9de8_c661562c7150.slice - libcontainer container kubepods-besteffort-podc48322e9_ac5f_466c_9de8_c661562c7150.slice.
Aug 13 07:10:47.797342 kubelet[2516]: I0813 07:10:47.797293 2516 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Aug 13 07:10:47.797342 kubelet[2516]: I0813 07:10:47.797325 2516 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c4kw\" (UniqueName: \"kubernetes.io/projected/c48322e9-ac5f-466c-9de8-c661562c7150-kube-api-access-9c4kw\") on node \"localhost\" DevicePath \"\""
Aug 13 07:10:47.797342 kubelet[2516]: I0813 07:10:47.797334 2516 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c48322e9-ac5f-466c-9de8-c661562c7150-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Aug 13 07:10:48.005757 kubelet[2516]: I0813 07:10:48.005648 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z4jtm" podStartSLOduration=2.104566223 podStartE2EDuration="25.005625712s" podCreationTimestamp="2025-08-13 07:10:23 +0000 UTC" firstStartedPulling="2025-08-13 07:10:23.961726832 +0000 UTC m=+20.426632821" lastFinishedPulling="2025-08-13 07:10:46.862786321 +0000 UTC m=+43.327692310" observedRunningTime="2025-08-13 07:10:48.005234958 +0000 UTC m=+44.470140977" watchObservedRunningTime="2025-08-13 07:10:48.005625712 +0000 UTC m=+44.470531701"
Aug 13 07:10:48.055938 systemd[1]: Created slice kubepods-besteffort-poda922834e_90d7_40ee_a111_ed9b5cafde75.slice - libcontainer container kubepods-besteffort-poda922834e_90d7_40ee_a111_ed9b5cafde75.slice.
Aug 13 07:10:48.200239 kubelet[2516]: I0813 07:10:48.200161 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hsbt\" (UniqueName: \"kubernetes.io/projected/a922834e-90d7-40ee-a111-ed9b5cafde75-kube-api-access-6hsbt\") pod \"whisker-5647f6897-2p7wj\" (UID: \"a922834e-90d7-40ee-a111-ed9b5cafde75\") " pod="calico-system/whisker-5647f6897-2p7wj"
Aug 13 07:10:48.200239 kubelet[2516]: I0813 07:10:48.200228 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a922834e-90d7-40ee-a111-ed9b5cafde75-whisker-ca-bundle\") pod \"whisker-5647f6897-2p7wj\" (UID: \"a922834e-90d7-40ee-a111-ed9b5cafde75\") " pod="calico-system/whisker-5647f6897-2p7wj"
Aug 13 07:10:48.200466 kubelet[2516]: I0813 07:10:48.200261 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a922834e-90d7-40ee-a111-ed9b5cafde75-whisker-backend-key-pair\") pod \"whisker-5647f6897-2p7wj\" (UID: \"a922834e-90d7-40ee-a111-ed9b5cafde75\") " pod="calico-system/whisker-5647f6897-2p7wj"
Aug 13 07:10:48.424830 systemd[1]: Started sshd@8-10.0.0.75:22-10.0.0.1:38888.service - OpenSSH per-connection server daemon (10.0.0.1:38888).
Aug 13 07:10:48.487119 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 38888 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:10:48.489338 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:48.494259 systemd-logind[1459]: New session 9 of user core. Aug 13 07:10:48.504133 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:10:48.660453 containerd[1471]: time="2025-08-13T07:10:48.660378181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5647f6897-2p7wj,Uid:a922834e-90d7-40ee-a111-ed9b5cafde75,Namespace:calico-system,Attempt:0,}" Aug 13 07:10:48.791315 sshd[3934]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:48.797234 systemd[1]: sshd@8-10.0.0.75:22-10.0.0.1:38888.service: Deactivated successfully. Aug 13 07:10:48.801673 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:10:48.815042 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:10:48.816100 systemd-logind[1459]: Removed session 9. Aug 13 07:10:48.954036 kernel: bpftool[4100]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:10:49.065207 systemd-networkd[1402]: cali0e1d0fdc845: Link UP Aug 13 07:10:49.069443 systemd-networkd[1402]: cali0e1d0fdc845: Gained carrier Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.834 [INFO][4048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.856 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5647f6897--2p7wj-eth0 whisker-5647f6897- calico-system a922834e-90d7-40ee-a111-ed9b5cafde75 970 0 2025-08-13 07:10:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5647f6897 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5647f6897-2p7wj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0e1d0fdc845 [] [] }} ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.858 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-eth0" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.973 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" HandleID="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Workload="localhost-k8s-whisker--5647f6897--2p7wj-eth0" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.973 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" HandleID="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Workload="localhost-k8s-whisker--5647f6897--2p7wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e440), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5647f6897-2p7wj", "timestamp":"2025-08-13 07:10:48.973339467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.973 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.973 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.974 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.981 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.990 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.995 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.997 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.999 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:48.999 [INFO][4092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:49.000 [INFO][4092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400 Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:49.039 [INFO][4092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:49.046 [INFO][4092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:49.046 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" host="localhost" Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:49.046 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
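The [4092] entries above, which conclude just below with the assigned-address summary, walk through one complete Calico IPAM assignment: take the host-wide lock, look up this host's block affinities, confirm affinity for the /26 block 192.168.88.128/26, claim the lowest free address under a per-container handle, write the block back, release the lock. The four sandboxes in this section receive 192.168.88.129 through .132 in exactly that order. A toy sketch of the block-affinity pattern with invented types (not Calico's implementation):

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block models a /26 affine to this host: a base address plus a
    // map from ordinal to the handle that claimed it, so a later CNI
    // DEL can release by handle.
    type block struct {
        mu   sync.Mutex // stands in for the host-wide IPAM lock
        base netip.Addr // 192.168.88.128
        size int        // 64 addresses in a /26
        used map[int]string
    }

    func newBlock(cidr string) *block {
        p := netip.MustParsePrefix(cidr)
        return &block{base: p.Addr(), size: 1 << (32 - p.Bits()), used: map[int]string{}}
    }

    // assign claims the lowest free ordinal for a handle, mirroring
    // "Attempting to assign 1 addresses from block".
    func (b *block) assign(handle string) (netip.Addr, bool) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for i := 1; i < b.size; i++ { // ordinal 0 is the block base
            if _, taken := b.used[i]; !taken {
                b.used[i] = handle
                a := b.base
                for j := 0; j < i; j++ {
                    a = a.Next()
                }
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        blk := newBlock("192.168.88.128/26")
        for _, h := range []string{"whisker", "apiserver", "csi", "kube-controllers"} {
            ip, _ := blk.assign("k8s-pod-network." + h)
            fmt.Println(h, "->", ip) // .129, .130, .131, .132 in order
        }
    }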
Aug 13 07:10:49.100996 containerd[1471]: 2025-08-13 07:10:49.046 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" HandleID="k8s-pod-network.ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Workload="localhost-k8s-whisker--5647f6897--2p7wj-eth0" Aug 13 07:10:49.101704 containerd[1471]: 2025-08-13 07:10:49.052 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5647f6897--2p7wj-eth0", GenerateName:"whisker-5647f6897-", Namespace:"calico-system", SelfLink:"", UID:"a922834e-90d7-40ee-a111-ed9b5cafde75", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5647f6897", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5647f6897-2p7wj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e1d0fdc845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:49.101704 containerd[1471]: 2025-08-13 07:10:49.052 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-eth0" Aug 13 07:10:49.101704 containerd[1471]: 2025-08-13 07:10:49.052 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e1d0fdc845 ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-eth0" Aug 13 07:10:49.101704 containerd[1471]: 2025-08-13 07:10:49.080 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-eth0" Aug 13 07:10:49.101704 containerd[1471]: 2025-08-13 07:10:49.082 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5647f6897--2p7wj-eth0", GenerateName:"whisker-5647f6897-", Namespace:"calico-system", SelfLink:"", UID:"a922834e-90d7-40ee-a111-ed9b5cafde75", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5647f6897", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400", Pod:"whisker-5647f6897-2p7wj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e1d0fdc845", MAC:"5a:dd:21:8f:20:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:49.101704 containerd[1471]: 2025-08-13 07:10:49.091 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400" Namespace="calico-system" Pod="whisker-5647f6897-2p7wj" WorkloadEndpoint="localhost-k8s-whisker--5647f6897--2p7wj-eth0" Aug 13 07:10:49.136758 containerd[1471]: time="2025-08-13T07:10:49.136340215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:49.136758 containerd[1471]: time="2025-08-13T07:10:49.136426296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:49.136758 containerd[1471]: time="2025-08-13T07:10:49.136445833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:49.136758 containerd[1471]: time="2025-08-13T07:10:49.136591547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:49.165157 systemd[1]: Started cri-containerd-ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400.scope - libcontainer container ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400. 
Aug 13 07:10:49.179558 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:49.212845 containerd[1471]: time="2025-08-13T07:10:49.212382803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5647f6897-2p7wj,Uid:a922834e-90d7-40ee-a111-ed9b5cafde75,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400\"" Aug 13 07:10:49.218737 containerd[1471]: time="2025-08-13T07:10:49.218695917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:10:49.234391 systemd-networkd[1402]: vxlan.calico: Link UP Aug 13 07:10:49.234405 systemd-networkd[1402]: vxlan.calico: Gained carrier Aug 13 07:10:49.308839 systemd[1]: run-containerd-runc-k8s.io-f03cbb9a14f0698e55be47e4264b68ff40820edeb8a41c3be3be9ea25d403861-runc.neAvlD.mount: Deactivated successfully. Aug 13 07:10:49.742142 kubelet[2516]: I0813 07:10:49.742077 2516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48322e9-ac5f-466c-9de8-c661562c7150" path="/var/lib/kubelet/pods/c48322e9-ac5f-466c-9de8-c661562c7150/volumes" Aug 13 07:10:50.310301 systemd-networkd[1402]: cali0e1d0fdc845: Gained IPv6LL Aug 13 07:10:50.438179 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Aug 13 07:10:50.673348 containerd[1471]: time="2025-08-13T07:10:50.673195251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:50.674125 containerd[1471]: time="2025-08-13T07:10:50.674084840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:10:50.675418 containerd[1471]: time="2025-08-13T07:10:50.675370301Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:50.677854 containerd[1471]: time="2025-08-13T07:10:50.677816801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:50.678528 containerd[1471]: time="2025-08-13T07:10:50.678498800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.459560869s" Aug 13 07:10:50.678572 containerd[1471]: time="2025-08-13T07:10:50.678531982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:10:50.680566 containerd[1471]: time="2025-08-13T07:10:50.680539819Z" level=info msg="CreateContainer within sandbox \"ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:10:50.695372 containerd[1471]: time="2025-08-13T07:10:50.695310578Z" level=info msg="CreateContainer within sandbox \"ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"50f700beed5981564b4fdb19d48b01393a1853b907407eb7a1921955bf7b6fca\"" Aug 13 07:10:50.696319 containerd[1471]: time="2025-08-13T07:10:50.696227398Z" level=info msg="StartContainer for \"50f700beed5981564b4fdb19d48b01393a1853b907407eb7a1921955bf7b6fca\"" Aug 13 07:10:50.730147 systemd[1]: Started cri-containerd-50f700beed5981564b4fdb19d48b01393a1853b907407eb7a1921955bf7b6fca.scope - libcontainer container 50f700beed5981564b4fdb19d48b01393a1853b907407eb7a1921955bf7b6fca. Aug 13 07:10:50.739738 containerd[1471]: time="2025-08-13T07:10:50.739708867Z" level=info msg="StopPodSandbox for \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\"" Aug 13 07:10:50.779444 containerd[1471]: time="2025-08-13T07:10:50.779374257Z" level=info msg="StartContainer for \"50f700beed5981564b4fdb19d48b01393a1853b907407eb7a1921955bf7b6fca\" returns successfully" Aug 13 07:10:50.782270 containerd[1471]: time="2025-08-13T07:10:50.782211651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.796 [INFO][4312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.796 [INFO][4312] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" iface="eth0" netns="/var/run/netns/cni-84a2fb59-7bb2-def3-2697-f8486eae39ed" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.796 [INFO][4312] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" iface="eth0" netns="/var/run/netns/cni-84a2fb59-7bb2-def3-2697-f8486eae39ed" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.796 [INFO][4312] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" iface="eth0" netns="/var/run/netns/cni-84a2fb59-7bb2-def3-2697-f8486eae39ed" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.796 [INFO][4312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.796 [INFO][4312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.820 [INFO][4334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.821 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.821 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.827 [WARNING][4334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.827 [INFO][4334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.829 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:50.835652 containerd[1471]: 2025-08-13 07:10:50.832 [INFO][4312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:10:50.836330 containerd[1471]: time="2025-08-13T07:10:50.835847879Z" level=info msg="TearDown network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\" successfully" Aug 13 07:10:50.836330 containerd[1471]: time="2025-08-13T07:10:50.835876813Z" level=info msg="StopPodSandbox for \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\" returns successfully" Aug 13 07:10:50.836694 containerd[1471]: time="2025-08-13T07:10:50.836620168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-jsmst,Uid:8e4cc1e4-02e4-465a-8bb4-0feae995f17b,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:10:50.840052 systemd[1]: run-netns-cni\x2d84a2fb59\x2d7bb2\x2ddef3\x2d2697\x2df8486eae39ed.mount: Deactivated successfully. Aug 13 07:10:50.959028 systemd-networkd[1402]: cali497bfe1a3db: Link UP Aug 13 07:10:50.959312 systemd-networkd[1402]: cali497bfe1a3db: Gained carrier Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.889 [INFO][4344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0 calico-apiserver-7cd8575958- calico-apiserver 8e4cc1e4-02e4-465a-8bb4-0feae995f17b 995 0 2025-08-13 07:10:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd8575958 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd8575958-jsmst eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali497bfe1a3db [] [] }} ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.889 [INFO][4344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.916 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" 
HandleID="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.917 [INFO][4360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" HandleID="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd8575958-jsmst", "timestamp":"2025-08-13 07:10:50.916955613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.917 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.917 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.917 [INFO][4360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.924 [INFO][4360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.928 [INFO][4360] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.934 [INFO][4360] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.937 [INFO][4360] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.940 [INFO][4360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.940 [INFO][4360] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.941 [INFO][4360] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.944 [INFO][4360] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.950 [INFO][4360] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.950 [INFO][4360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" host="localhost" Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.951 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:50.976420 containerd[1471]: 2025-08-13 07:10:50.951 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" HandleID="k8s-pod-network.e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.977001 containerd[1471]: 2025-08-13 07:10:50.955 [INFO][4344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e4cc1e4-02e4-465a-8bb4-0feae995f17b", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd8575958-jsmst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali497bfe1a3db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:50.977001 containerd[1471]: 2025-08-13 07:10:50.955 [INFO][4344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.977001 containerd[1471]: 2025-08-13 07:10:50.956 [INFO][4344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali497bfe1a3db ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.977001 containerd[1471]: 2025-08-13 07:10:50.959 [INFO][4344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.977001 containerd[1471]: 2025-08-13 07:10:50.959 [INFO][4344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e4cc1e4-02e4-465a-8bb4-0feae995f17b", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e", Pod:"calico-apiserver-7cd8575958-jsmst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali497bfe1a3db", MAC:"a2:4b:e9:00:73:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:50.977001 containerd[1471]: 2025-08-13 07:10:50.970 [INFO][4344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-jsmst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:10:50.998087 containerd[1471]: time="2025-08-13T07:10:50.997383372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:50.998087 containerd[1471]: time="2025-08-13T07:10:50.997455828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:50.998087 containerd[1471]: time="2025-08-13T07:10:50.997471507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:50.998087 containerd[1471]: time="2025-08-13T07:10:50.997560254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:51.021127 systemd[1]: Started cri-containerd-e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e.scope - libcontainer container e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e. 
Aug 13 07:10:51.034529 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:51.058553 containerd[1471]: time="2025-08-13T07:10:51.058497033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-jsmst,Uid:8e4cc1e4-02e4-465a-8bb4-0feae995f17b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e\"" Aug 13 07:10:51.740835 containerd[1471]: time="2025-08-13T07:10:51.740771919Z" level=info msg="StopPodSandbox for \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\"" Aug 13 07:10:51.741573 containerd[1471]: time="2025-08-13T07:10:51.741529681Z" level=info msg="StopPodSandbox for \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\"" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.798 [INFO][4445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.800 [INFO][4445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" iface="eth0" netns="/var/run/netns/cni-3501e792-4098-6300-16af-b78f1cae71e7" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.800 [INFO][4445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" iface="eth0" netns="/var/run/netns/cni-3501e792-4098-6300-16af-b78f1cae71e7" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.801 [INFO][4445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" iface="eth0" netns="/var/run/netns/cni-3501e792-4098-6300-16af-b78f1cae71e7" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.801 [INFO][4445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.801 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.826 [INFO][4459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.826 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.826 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.832 [WARNING][4459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.832 [INFO][4459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.833 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:51.840042 containerd[1471]: 2025-08-13 07:10:51.836 [INFO][4445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:10:51.840952 containerd[1471]: time="2025-08-13T07:10:51.840855864Z" level=info msg="TearDown network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\" successfully" Aug 13 07:10:51.840952 containerd[1471]: time="2025-08-13T07:10:51.840895418Z" level=info msg="StopPodSandbox for \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\" returns successfully" Aug 13 07:10:51.842327 containerd[1471]: time="2025-08-13T07:10:51.842261220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6twvn,Uid:56153e13-236a-410f-9ff5-c48bb048b643,Namespace:calico-system,Attempt:1,}" Aug 13 07:10:51.845406 systemd[1]: run-netns-cni\x2d3501e792\x2d4098\x2d6300\x2d16af\x2db78f1cae71e7.mount: Deactivated successfully. Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.799 [INFO][4436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.799 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" iface="eth0" netns="/var/run/netns/cni-9dc32f62-2d3a-1649-f216-924ac8ad1cb3" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.799 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" iface="eth0" netns="/var/run/netns/cni-9dc32f62-2d3a-1649-f216-924ac8ad1cb3" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.800 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" iface="eth0" netns="/var/run/netns/cni-9dc32f62-2d3a-1649-f216-924ac8ad1cb3" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.800 [INFO][4436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.800 [INFO][4436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.826 [INFO][4457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.827 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.833 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.839 [WARNING][4457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.839 [INFO][4457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.841 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:51.852159 containerd[1471]: 2025-08-13 07:10:51.847 [INFO][4436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:10:51.852634 containerd[1471]: time="2025-08-13T07:10:51.852331581Z" level=info msg="TearDown network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\" successfully" Aug 13 07:10:51.852634 containerd[1471]: time="2025-08-13T07:10:51.852358682Z" level=info msg="StopPodSandbox for \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\" returns successfully" Aug 13 07:10:51.853340 containerd[1471]: time="2025-08-13T07:10:51.853137282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54495f4bd9-bw4q4,Uid:0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3,Namespace:calico-system,Attempt:1,}" Aug 13 07:10:51.856894 systemd[1]: run-netns-cni\x2d9dc32f62\x2d2d3a\x2d1649\x2df216\x2d924ac8ad1cb3.mount: Deactivated successfully. 
Aug 13 07:10:51.987244 systemd-networkd[1402]: calid9e7608e1db: Link UP Aug 13 07:10:51.988734 systemd-networkd[1402]: calid9e7608e1db: Gained carrier Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.911 [INFO][4473] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6twvn-eth0 csi-node-driver- calico-system 56153e13-236a-410f-9ff5-c48bb048b643 1006 0 2025-08-13 07:10:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6twvn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid9e7608e1db [] [] }} ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.911 [INFO][4473] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.942 [INFO][4502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" HandleID="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.942 [INFO][4502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" HandleID="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6twvn", "timestamp":"2025-08-13 07:10:51.942074817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.942 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.942 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.942 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.949 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.956 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.961 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.963 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.965 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.965 [INFO][4502] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.967 [INFO][4502] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.971 [INFO][4502] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.979 [INFO][4502] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.979 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" host="localhost" Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.979 [INFO][4502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:10:52.007321 containerd[1471]: 2025-08-13 07:10:51.979 [INFO][4502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" HandleID="k8s-pod-network.889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:52.007869 containerd[1471]: 2025-08-13 07:10:51.983 [INFO][4473] cni-plugin/k8s.go 418: Populated endpoint ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6twvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56153e13-236a-410f-9ff5-c48bb048b643", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6twvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9e7608e1db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:52.007869 containerd[1471]: 2025-08-13 07:10:51.983 [INFO][4473] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:52.007869 containerd[1471]: 2025-08-13 07:10:51.983 [INFO][4473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9e7608e1db ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:52.007869 containerd[1471]: 2025-08-13 07:10:51.988 [INFO][4473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:52.007869 containerd[1471]: 2025-08-13 07:10:51.991 [INFO][4473] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6twvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56153e13-236a-410f-9ff5-c48bb048b643", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d", Pod:"csi-node-driver-6twvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9e7608e1db", MAC:"d2:65:59:f8:94:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:52.007869 containerd[1471]: 2025-08-13 07:10:52.002 [INFO][4473] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d" Namespace="calico-system" Pod="csi-node-driver-6twvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:10:52.030292 containerd[1471]: time="2025-08-13T07:10:52.030165000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:52.030292 containerd[1471]: time="2025-08-13T07:10:52.030257484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:52.030292 containerd[1471]: time="2025-08-13T07:10:52.030274836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:52.030533 containerd[1471]: time="2025-08-13T07:10:52.030392557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:52.052187 systemd[1]: Started cri-containerd-889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d.scope - libcontainer container 889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d. 
Aug 13 07:10:52.069334 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:52.087345 containerd[1471]: time="2025-08-13T07:10:52.087302529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6twvn,Uid:56153e13-236a-410f-9ff5-c48bb048b643,Namespace:calico-system,Attempt:1,} returns sandbox id \"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d\"" Aug 13 07:10:52.095349 systemd-networkd[1402]: calid33587a11a0: Link UP Aug 13 07:10:52.096285 systemd-networkd[1402]: calid33587a11a0: Gained carrier Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:51.911 [INFO][4484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0 calico-kube-controllers-54495f4bd9- calico-system 0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3 1007 0 2025-08-13 07:10:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54495f4bd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54495f4bd9-bw4q4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid33587a11a0 [] [] }} ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:51.911 [INFO][4484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:51.952 [INFO][4504] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" HandleID="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:51.953 [INFO][4504] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" HandleID="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54495f4bd9-bw4q4", "timestamp":"2025-08-13 07:10:51.952865149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:51.953 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:51.979 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:51.979 [INFO][4504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.051 [INFO][4504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.057 [INFO][4504] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.061 [INFO][4504] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.064 [INFO][4504] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.066 [INFO][4504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.066 [INFO][4504] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.068 [INFO][4504] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03 Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.073 [INFO][4504] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.087 [INFO][4504] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.088 [INFO][4504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" host="localhost" Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.088 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:10:52.118499 containerd[1471]: 2025-08-13 07:10:52.088 [INFO][4504] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" HandleID="k8s-pod-network.dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:52.119299 containerd[1471]: 2025-08-13 07:10:52.092 [INFO][4484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0", GenerateName:"calico-kube-controllers-54495f4bd9-", Namespace:"calico-system", SelfLink:"", UID:"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54495f4bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54495f4bd9-bw4q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid33587a11a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:52.119299 containerd[1471]: 2025-08-13 07:10:52.092 [INFO][4484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:52.119299 containerd[1471]: 2025-08-13 07:10:52.092 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid33587a11a0 ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:52.119299 containerd[1471]: 2025-08-13 07:10:52.097 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:52.119299 containerd[1471]: 2025-08-13 07:10:52.098 [INFO][4484] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0", GenerateName:"calico-kube-controllers-54495f4bd9-", Namespace:"calico-system", SelfLink:"", UID:"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54495f4bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03", Pod:"calico-kube-controllers-54495f4bd9-bw4q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid33587a11a0", MAC:"c6:70:1e:4e:c9:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:52.119299 containerd[1471]: 2025-08-13 07:10:52.113 [INFO][4484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03" Namespace="calico-system" Pod="calico-kube-controllers-54495f4bd9-bw4q4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:10:52.152150 containerd[1471]: time="2025-08-13T07:10:52.151989367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:52.152150 containerd[1471]: time="2025-08-13T07:10:52.152060571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:52.152150 containerd[1471]: time="2025-08-13T07:10:52.152071822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:52.152451 containerd[1471]: time="2025-08-13T07:10:52.152267179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:52.177233 systemd[1]: Started cri-containerd-dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03.scope - libcontainer container dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03. 
Aug 13 07:10:52.195167 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:52.225779 containerd[1471]: time="2025-08-13T07:10:52.225709601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54495f4bd9-bw4q4,Uid:0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03\"" Aug 13 07:10:52.486207 systemd-networkd[1402]: cali497bfe1a3db: Gained IPv6LL Aug 13 07:10:52.740686 containerd[1471]: time="2025-08-13T07:10:52.740514287Z" level=info msg="StopPodSandbox for \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\"" Aug 13 07:10:52.740686 containerd[1471]: time="2025-08-13T07:10:52.740570713Z" level=info msg="StopPodSandbox for \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\"" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.880 [INFO][4635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.880 [INFO][4635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" iface="eth0" netns="/var/run/netns/cni-ea452de9-8a10-0cbe-7105-bd47963db13c" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.881 [INFO][4635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" iface="eth0" netns="/var/run/netns/cni-ea452de9-8a10-0cbe-7105-bd47963db13c" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.884 [INFO][4635] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" iface="eth0" netns="/var/run/netns/cni-ea452de9-8a10-0cbe-7105-bd47963db13c" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.884 [INFO][4635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.884 [INFO][4635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.913 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.914 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.914 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.924 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.925 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.927 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:52.935796 containerd[1471]: 2025-08-13 07:10:52.931 [INFO][4635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:10:52.937086 containerd[1471]: time="2025-08-13T07:10:52.937006354Z" level=info msg="TearDown network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\" successfully" Aug 13 07:10:52.937086 containerd[1471]: time="2025-08-13T07:10:52.937056007Z" level=info msg="StopPodSandbox for \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\" returns successfully" Aug 13 07:10:52.937969 containerd[1471]: time="2025-08-13T07:10:52.937902024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-vfc6n,Uid:4bdd3ae4-af36-4635-bf89-707fda641eeb,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:10:52.940063 systemd[1]: run-netns-cni\x2dea452de9\x2d8a10\x2d0cbe\x2d7105\x2dbd47963db13c.mount: Deactivated successfully. Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.878 [INFO][4643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.878 [INFO][4643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" iface="eth0" netns="/var/run/netns/cni-800c759c-36c8-18f7-b30f-3c5c273621bb" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.879 [INFO][4643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" iface="eth0" netns="/var/run/netns/cni-800c759c-36c8-18f7-b30f-3c5c273621bb" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.880 [INFO][4643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" iface="eth0" netns="/var/run/netns/cni-800c759c-36c8-18f7-b30f-3c5c273621bb" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.880 [INFO][4643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.880 [INFO][4643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.919 [INFO][4654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.919 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.927 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.934 [WARNING][4654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.934 [INFO][4654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.983 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:52.989331 containerd[1471]: 2025-08-13 07:10:52.986 [INFO][4643] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:10:52.989929 containerd[1471]: time="2025-08-13T07:10:52.989592335Z" level=info msg="TearDown network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\" successfully" Aug 13 07:10:52.989929 containerd[1471]: time="2025-08-13T07:10:52.989632310Z" level=info msg="StopPodSandbox for \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\" returns successfully" Aug 13 07:10:52.990242 kubelet[2516]: E0813 07:10:52.990188 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:52.991772 containerd[1471]: time="2025-08-13T07:10:52.991290691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gbvdn,Uid:1735f9ec-98da-41a0-9a73-36a5bf3e9daf,Namespace:kube-system,Attempt:1,}" Aug 13 07:10:52.993813 systemd[1]: run-netns-cni\x2d800c759c\x2d36c8\x2d18f7\x2db30f\x2d3c5c273621bb.mount: Deactivated successfully. 
Aug 13 07:10:53.190405 systemd-networkd[1402]: calid9e7608e1db: Gained IPv6LL Aug 13 07:10:53.235284 systemd-networkd[1402]: calief7d5c0295e: Link UP Aug 13 07:10:53.235617 systemd-networkd[1402]: calief7d5c0295e: Gained carrier Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.146 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0 calico-apiserver-7cd8575958- calico-apiserver 4bdd3ae4-af36-4635-bf89-707fda641eeb 1024 0 2025-08-13 07:10:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd8575958 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd8575958-vfc6n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calief7d5c0295e [] [] }} ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.146 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4706] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" HandleID="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4706] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" HandleID="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd8575958-vfc6n", "timestamp":"2025-08-13 07:10:53.180165377 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4706] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.186 [INFO][4706] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.192 [INFO][4706] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.201 [INFO][4706] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.203 [INFO][4706] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.206 [INFO][4706] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.206 [INFO][4706] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.208 [INFO][4706] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086 Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.215 [INFO][4706] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.222 [INFO][4706] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.222 [INFO][4706] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" host="localhost" Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.222 [INFO][4706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:10:53.259013 containerd[1471]: 2025-08-13 07:10:53.222 [INFO][4706] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" HandleID="k8s-pod-network.e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:53.259871 containerd[1471]: 2025-08-13 07:10:53.227 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bdd3ae4-af36-4635-bf89-707fda641eeb", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd8575958-vfc6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief7d5c0295e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:53.259871 containerd[1471]: 2025-08-13 07:10:53.227 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:53.259871 containerd[1471]: 2025-08-13 07:10:53.227 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief7d5c0295e ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:53.259871 containerd[1471]: 2025-08-13 07:10:53.236 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:53.259871 containerd[1471]: 2025-08-13 07:10:53.237 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bdd3ae4-af36-4635-bf89-707fda641eeb", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086", Pod:"calico-apiserver-7cd8575958-vfc6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief7d5c0295e", MAC:"3e:cf:2c:4a:08:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:53.259871 containerd[1471]: 2025-08-13 07:10:53.250 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086" Namespace="calico-apiserver" Pod="calico-apiserver-7cd8575958-vfc6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:10:53.321685 containerd[1471]: time="2025-08-13T07:10:53.321513557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:53.321685 containerd[1471]: time="2025-08-13T07:10:53.321591473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:53.321685 containerd[1471]: time="2025-08-13T07:10:53.321628993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:53.322627 containerd[1471]: time="2025-08-13T07:10:53.321857261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:53.343327 systemd-networkd[1402]: cali145fd3394bf: Link UP Aug 13 07:10:53.345046 systemd-networkd[1402]: cali145fd3394bf: Gained carrier Aug 13 07:10:53.350226 systemd[1]: Started cri-containerd-e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086.scope - libcontainer container e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086. 
Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.141 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0 coredns-7c65d6cfc9- kube-system 1735f9ec-98da-41a0-9a73-36a5bf3e9daf 1023 0 2025-08-13 07:10:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gbvdn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali145fd3394bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.141 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" HandleID="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" HandleID="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af0b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gbvdn", "timestamp":"2025-08-13 07:10:53.180186246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.180 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.222 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.223 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.288 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.296 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.306 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.309 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.313 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.313 [INFO][4704] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.315 [INFO][4704] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.321 [INFO][4704] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.330 [INFO][4704] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.330 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" host="localhost" Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.331 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:10:53.364818 containerd[1471]: 2025-08-13 07:10:53.331 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" HandleID="k8s-pod-network.70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:53.365810 containerd[1471]: 2025-08-13 07:10:53.339 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1735f9ec-98da-41a0-9a73-36a5bf3e9daf", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gbvdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali145fd3394bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:53.365810 containerd[1471]: 2025-08-13 07:10:53.339 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:53.365810 containerd[1471]: 2025-08-13 07:10:53.339 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali145fd3394bf ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:53.365810 containerd[1471]: 2025-08-13 07:10:53.346 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:53.365810 
containerd[1471]: 2025-08-13 07:10:53.346 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1735f9ec-98da-41a0-9a73-36a5bf3e9daf", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed", Pod:"coredns-7c65d6cfc9-gbvdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali145fd3394bf", MAC:"9e:e1:75:a1:c6:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:53.365810 containerd[1471]: 2025-08-13 07:10:53.358 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gbvdn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:10:53.380051 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:53.402374 containerd[1471]: time="2025-08-13T07:10:53.402172781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:53.402374 containerd[1471]: time="2025-08-13T07:10:53.402231621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:53.402374 containerd[1471]: time="2025-08-13T07:10:53.402243333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:53.402374 containerd[1471]: time="2025-08-13T07:10:53.402336197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:53.415587 containerd[1471]: time="2025-08-13T07:10:53.415413897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd8575958-vfc6n,Uid:4bdd3ae4-af36-4635-bf89-707fda641eeb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086\"" Aug 13 07:10:53.435286 systemd[1]: Started cri-containerd-70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed.scope - libcontainer container 70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed. Aug 13 07:10:53.452436 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:53.496937 containerd[1471]: time="2025-08-13T07:10:53.496876028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gbvdn,Uid:1735f9ec-98da-41a0-9a73-36a5bf3e9daf,Namespace:kube-system,Attempt:1,} returns sandbox id \"70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed\"" Aug 13 07:10:53.498625 kubelet[2516]: E0813 07:10:53.498210 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:53.501698 containerd[1471]: time="2025-08-13T07:10:53.501583407Z" level=info msg="CreateContainer within sandbox \"70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:10:53.528355 containerd[1471]: time="2025-08-13T07:10:53.528301704Z" level=info msg="CreateContainer within sandbox \"70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc4f62fbe1e9c459f1bdf98d7886ea96d2bb22ac9f86d674c62dc21f3357ccf2\"" Aug 13 07:10:53.531410 containerd[1471]: time="2025-08-13T07:10:53.530437750Z" level=info msg="StartContainer for \"fc4f62fbe1e9c459f1bdf98d7886ea96d2bb22ac9f86d674c62dc21f3357ccf2\"" Aug 13 07:10:53.562169 systemd[1]: Started cri-containerd-fc4f62fbe1e9c459f1bdf98d7886ea96d2bb22ac9f86d674c62dc21f3357ccf2.scope - libcontainer container fc4f62fbe1e9c459f1bdf98d7886ea96d2bb22ac9f86d674c62dc21f3357ccf2. Aug 13 07:10:53.698415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499587819.mount: Deactivated successfully. 
Aug 13 07:10:53.736400 containerd[1471]: time="2025-08-13T07:10:53.736335737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:53.738067 containerd[1471]: time="2025-08-13T07:10:53.737995029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:10:53.739551 containerd[1471]: time="2025-08-13T07:10:53.739509259Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:53.743047 containerd[1471]: time="2025-08-13T07:10:53.741617855Z" level=info msg="StopPodSandbox for \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\"" Aug 13 07:10:53.743047 containerd[1471]: time="2025-08-13T07:10:53.741695570Z" level=info msg="StopPodSandbox for \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\"" Aug 13 07:10:53.743047 containerd[1471]: time="2025-08-13T07:10:53.742240762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:53.743816 containerd[1471]: time="2025-08-13T07:10:53.743788696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.961527523s" Aug 13 07:10:53.743923 containerd[1471]: time="2025-08-13T07:10:53.743906848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:10:53.745654 containerd[1471]: time="2025-08-13T07:10:53.745634809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:10:53.746570 containerd[1471]: time="2025-08-13T07:10:53.746479674Z" level=info msg="StartContainer for \"fc4f62fbe1e9c459f1bdf98d7886ea96d2bb22ac9f86d674c62dc21f3357ccf2\" returns successfully" Aug 13 07:10:53.747838 containerd[1471]: time="2025-08-13T07:10:53.747795842Z" level=info msg="CreateContainer within sandbox \"ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:10:53.769400 containerd[1471]: time="2025-08-13T07:10:53.769269941Z" level=info msg="CreateContainer within sandbox \"ac0f7b07f114c379fdf78d5a9242aba809f73b3e451a4cc8cf9fc2bbed2a4400\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e1089ecb64b8b239c405fb17133c9398afa6586ba11229931d4f2a1db53a6262\"" Aug 13 07:10:53.770387 containerd[1471]: time="2025-08-13T07:10:53.770006163Z" level=info msg="StartContainer for \"e1089ecb64b8b239c405fb17133c9398afa6586ba11229931d4f2a1db53a6262\"" Aug 13 07:10:53.812281 systemd[1]: Started sshd@9-10.0.0.75:22-10.0.0.1:38890.service - OpenSSH per-connection server daemon (10.0.0.1:38890). 
Aug 13 07:10:53.835258 systemd[1]: Started cri-containerd-e1089ecb64b8b239c405fb17133c9398afa6586ba11229931d4f2a1db53a6262.scope - libcontainer container e1089ecb64b8b239c405fb17133c9398afa6586ba11229931d4f2a1db53a6262. Aug 13 07:10:53.901103 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 38890 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:10:53.903944 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.823 [INFO][4883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.824 [INFO][4883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" iface="eth0" netns="/var/run/netns/cni-c9bebd6b-db0c-654c-c09a-e415f25c1c0b" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.824 [INFO][4883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" iface="eth0" netns="/var/run/netns/cni-c9bebd6b-db0c-654c-c09a-e415f25c1c0b" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.824 [INFO][4883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" iface="eth0" netns="/var/run/netns/cni-c9bebd6b-db0c-654c-c09a-e415f25c1c0b" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.824 [INFO][4883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.824 [INFO][4883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.880 [INFO][4922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.882 [INFO][4922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.882 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.890 [WARNING][4922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.890 [INFO][4922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.892 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:10:53.905364 containerd[1471]: 2025-08-13 07:10:53.896 [INFO][4883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:10:53.906655 containerd[1471]: time="2025-08-13T07:10:53.906620473Z" level=info msg="TearDown network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\" successfully" Aug 13 07:10:53.906655 containerd[1471]: time="2025-08-13T07:10:53.906653335Z" level=info msg="StopPodSandbox for \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\" returns successfully" Aug 13 07:10:53.907918 containerd[1471]: time="2025-08-13T07:10:53.907756013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n4sgd,Uid:f33e7410-3fa9-4e29-b769-12773c11a6bf,Namespace:calico-system,Attempt:1,}" Aug 13 07:10:53.912433 systemd-logind[1459]: New session 10 of user core. Aug 13 07:10:53.919183 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:10:53.927835 containerd[1471]: time="2025-08-13T07:10:53.927772508Z" level=info msg="StartContainer for \"e1089ecb64b8b239c405fb17133c9398afa6586ba11229931d4f2a1db53a6262\" returns successfully" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.846 [INFO][4889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.846 [INFO][4889] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" iface="eth0" netns="/var/run/netns/cni-62bce08b-a9b7-cb40-f3f2-4685fe852013" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.847 [INFO][4889] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" iface="eth0" netns="/var/run/netns/cni-62bce08b-a9b7-cb40-f3f2-4685fe852013" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.847 [INFO][4889] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" iface="eth0" netns="/var/run/netns/cni-62bce08b-a9b7-cb40-f3f2-4685fe852013" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.847 [INFO][4889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.847 [INFO][4889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.890 [INFO][4934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.891 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.892 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.905 [WARNING][4934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.906 [INFO][4934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.910 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:10:53.933954 containerd[1471]: 2025-08-13 07:10:53.924 [INFO][4889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:10:53.935153 containerd[1471]: time="2025-08-13T07:10:53.934557245Z" level=info msg="TearDown network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\" successfully" Aug 13 07:10:53.935153 containerd[1471]: time="2025-08-13T07:10:53.934596298Z" level=info msg="StopPodSandbox for \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\" returns successfully" Aug 13 07:10:53.935277 kubelet[2516]: E0813 07:10:53.935087 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:53.935795 containerd[1471]: time="2025-08-13T07:10:53.935764309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7nzk,Uid:89d9471b-7057-4239-b0e9-0c59341f2450,Namespace:kube-system,Attempt:1,}" Aug 13 07:10:53.958270 systemd-networkd[1402]: calid33587a11a0: Gained IPv6LL Aug 13 07:10:54.031280 kubelet[2516]: E0813 07:10:54.028970 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:54.049283 kubelet[2516]: I0813 07:10:54.048823 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gbvdn" podStartSLOduration=45.048805108 podStartE2EDuration="45.048805108s" podCreationTimestamp="2025-08-13 07:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:54.045863471 +0000 UTC m=+50.510769460" watchObservedRunningTime="2025-08-13 07:10:54.048805108 +0000 UTC m=+50.513711098" Aug 13 07:10:54.144287 sshd[4918]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:54.151517 systemd[1]: sshd@9-10.0.0.75:22-10.0.0.1:38890.service: Deactivated successfully. Aug 13 07:10:54.157702 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:10:54.160403 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:10:54.161434 systemd-logind[1459]: Removed session 10. 
Aug 13 07:10:54.198239 systemd-networkd[1402]: cali00c6b201b0c: Link UP Aug 13 07:10:54.199397 systemd-networkd[1402]: cali00c6b201b0c: Gained carrier Aug 13 07:10:54.210119 kubelet[2516]: I0813 07:10:54.210045 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5647f6897-2p7wj" podStartSLOduration=1.682595739 podStartE2EDuration="6.210018809s" podCreationTimestamp="2025-08-13 07:10:48 +0000 UTC" firstStartedPulling="2025-08-13 07:10:49.21795642 +0000 UTC m=+45.682862409" lastFinishedPulling="2025-08-13 07:10:53.74537949 +0000 UTC m=+50.210285479" observedRunningTime="2025-08-13 07:10:54.08981083 +0000 UTC m=+50.554716820" watchObservedRunningTime="2025-08-13 07:10:54.210018809 +0000 UTC m=+50.674924798" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.025 [INFO][4976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0 coredns-7c65d6cfc9- kube-system 89d9471b-7057-4239-b0e9-0c59341f2450 1045 0 2025-08-13 07:10:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-g7nzk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali00c6b201b0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.025 [INFO][4976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.138 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" HandleID="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.140 [INFO][5003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" HandleID="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000478660), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-g7nzk", "timestamp":"2025-08-13 07:10:54.138472142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.140 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.140 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
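The pod_startup_latency_tracker line for whisker-5647f6897-2p7wj above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch reproducing the logged numbers from the logged timestamps (field names follow the log line, not the kubelet source):

```go
// Hedged sketch of the arithmetic behind the pod startup duration line above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13T07:10:48Z")
	firstPull := mustParse("2025-08-13T07:10:49.21795642Z")
	lastPull := mustParse("2025-08-13T07:10:53.74537949Z")
	running := mustParse("2025-08-13T07:10:54.210018809Z")

	e2e := running.Sub(created)          // 6.210018809s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 1.682595739s, as logged
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
}
```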
Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.140 [INFO][5003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.147 [INFO][5003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.162 [INFO][5003] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.171 [INFO][5003] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.173 [INFO][5003] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.175 [INFO][5003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.175 [INFO][5003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.179 [INFO][5003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134 Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.183 [INFO][5003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.188 [INFO][5003] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.188 [INFO][5003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" host="localhost" Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.188 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
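The IPAM run under [5003] above follows Calico's block-affinity scheme: the node holds an affinity for 192.168.88.128/26, loads that block, and claims the next free address, landing on 192.168.88.135. A toy sketch of the in-block scan, assuming a plain map of claimed addresses instead of the datastore-backed block the real allocator persists under the host-wide lock:

```go
// Hedged sketch of the block-assignment step Calico's ipam.go logs above.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block from its network address and returns the first
// address not present in claimed.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := map[netip.Addr]bool{}
	// Simulate the seven addresses already handed out before .135 in the log.
	for a := netip.MustParseAddr("192.168.88.128"); a.Less(netip.MustParseAddr("192.168.88.135")); a = a.Next() {
		claimed[a] = true
	}
	if ip, ok := nextFree(block, claimed); ok {
		fmt.Printf("Successfully claimed IPs: [%s/26]\n", ip) // 192.168.88.135/26
	}
}
```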
Aug 13 07:10:54.215097 containerd[1471]: 2025-08-13 07:10:54.188 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" HandleID="k8s-pod-network.470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:54.216691 containerd[1471]: 2025-08-13 07:10:54.193 [INFO][4976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"89d9471b-7057-4239-b0e9-0c59341f2450", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-g7nzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00c6b201b0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:54.216691 containerd[1471]: 2025-08-13 07:10:54.193 [INFO][4976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:54.216691 containerd[1471]: 2025-08-13 07:10:54.193 [INFO][4976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00c6b201b0c ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:54.216691 containerd[1471]: 2025-08-13 07:10:54.200 [INFO][4976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:54.216691 
containerd[1471]: 2025-08-13 07:10:54.200 [INFO][4976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"89d9471b-7057-4239-b0e9-0c59341f2450", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134", Pod:"coredns-7c65d6cfc9-g7nzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00c6b201b0c", MAC:"ee:13:46:2e:07:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:54.216691 containerd[1471]: 2025-08-13 07:10:54.211 [INFO][4976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7nzk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:10:54.244447 containerd[1471]: time="2025-08-13T07:10:54.243456386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:54.244447 containerd[1471]: time="2025-08-13T07:10:54.244345123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:54.244447 containerd[1471]: time="2025-08-13T07:10:54.244365701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:54.244696 containerd[1471]: time="2025-08-13T07:10:54.244528947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:54.266212 systemd[1]: Started cri-containerd-470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134.scope - libcontainer container 470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134. Aug 13 07:10:54.282548 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:54.309473 containerd[1471]: time="2025-08-13T07:10:54.309421454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7nzk,Uid:89d9471b-7057-4239-b0e9-0c59341f2450,Namespace:kube-system,Attempt:1,} returns sandbox id \"470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134\"" Aug 13 07:10:54.310626 kubelet[2516]: E0813 07:10:54.310598 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:54.313004 containerd[1471]: time="2025-08-13T07:10:54.312882997Z" level=info msg="CreateContainer within sandbox \"470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:10:54.535256 systemd-networkd[1402]: calief7d5c0295e: Gained IPv6LL Aug 13 07:10:54.562372 systemd-networkd[1402]: cali80c3bc74afa: Link UP Aug 13 07:10:54.562613 systemd-networkd[1402]: cali80c3bc74afa: Gained carrier Aug 13 07:10:54.583618 containerd[1471]: time="2025-08-13T07:10:54.583562296Z" level=info msg="CreateContainer within sandbox \"470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9722d2d5456d7418002722d90412f4cd73a9a403708e3ec4e54a6ba1e24da35d\"" Aug 13 07:10:54.584259 containerd[1471]: time="2025-08-13T07:10:54.584186327Z" level=info msg="StartContainer for \"9722d2d5456d7418002722d90412f4cd73a9a403708e3ec4e54a6ba1e24da35d\"" Aug 13 07:10:54.614721 systemd[1]: Started cri-containerd-9722d2d5456d7418002722d90412f4cd73a9a403708e3ec4e54a6ba1e24da35d.scope - libcontainer container 9722d2d5456d7418002722d90412f4cd73a9a403708e3ec4e54a6ba1e24da35d. Aug 13 07:10:54.698390 systemd[1]: run-netns-cni\x2d62bce08b\x2da9b7\x2dcb40\x2df3f2\x2d4685fe852013.mount: Deactivated successfully. Aug 13 07:10:54.698535 systemd[1]: run-netns-cni\x2dc9bebd6b\x2ddb0c\x2d654c\x2dc09a\x2de415f25c1c0b.mount: Deactivated successfully. 
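The run-netns mount units deactivated above encode netns paths with systemd's unit-name escaping: "/" becomes "-", and literal dashes become \x2d. A minimal decoder for just those two rules (the full systemd-escape scheme covers more byte classes):

```go
// Hedged sketch: decode a systemd mount-unit name such as the
// run-netns-cni\x2d62bce08b... unit above back into its filesystem path.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(v))
			i += 3
		case name[i] == '-':
			b.WriteByte('/') // "-" separates path components
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`run-netns-cni\x2d62bce08b\x2da9b7\x2dcb40\x2df3f2\x2d4685fe852013.mount`))
	// -> /run/netns/cni-62bce08b-a9b7-cb40-f3f2-4685fe852013
}
```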
Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.017 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0 goldmane-58fd7646b9- calico-system f33e7410-3fa9-4e29-b769-12773c11a6bf 1044 0 2025-08-13 07:10:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-n4sgd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali80c3bc74afa [] [] }} ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.017 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.146 [INFO][5005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" HandleID="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.146 [INFO][5005] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" HandleID="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e6120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-n4sgd", "timestamp":"2025-08-13 07:10:54.145882532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.148 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.188 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.189 [INFO][5005] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.249 [INFO][5005] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.275 [INFO][5005] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.353 [INFO][5005] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.379 [INFO][5005] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.381 [INFO][5005] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.382 [INFO][5005] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.399 [INFO][5005] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.482 [INFO][5005] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.556 [INFO][5005] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.556 [INFO][5005] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" host="localhost" Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.556 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
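Every assignment above is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock", and the two concurrent ADDs ([5003] for coredns, [5005] for goldmane) interleave in the log yet claim distinct addresses (.135 and .136). A sketch of that serialization, with a sync.Mutex standing in for Calico's datastore-backed host lock:

```go
// Hedged sketch of the host-wide-lock pattern in the log; the real lock is
// held across datastore reads and writes, not an in-process mutex.
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // the "host-wide IPAM lock"
	next int        // next free host offset in 192.168.88.128/26
}

func (h *hostIPAM) assign() string {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d/26", 128+h.next)
	h.next++
	return ip
}

func main() {
	h := &hostIPAM{next: 7} // .128-.134 already allocated, as in the log
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-7c65d6cfc9-g7nzk", "goldmane-58fd7646b9-n4sgd"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", h.assign())
		}(pod)
	}
	wg.Wait()
}
```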
Aug 13 07:10:54.745037 containerd[1471]: 2025-08-13 07:10:54.556 [INFO][5005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" HandleID="k8s-pod-network.25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:54.745688 containerd[1471]: 2025-08-13 07:10:54.559 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f33e7410-3fa9-4e29-b769-12773c11a6bf", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-n4sgd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80c3bc74afa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:54.745688 containerd[1471]: 2025-08-13 07:10:54.560 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:54.745688 containerd[1471]: 2025-08-13 07:10:54.560 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80c3bc74afa ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:54.745688 containerd[1471]: 2025-08-13 07:10:54.562 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:54.745688 containerd[1471]: 2025-08-13 07:10:54.562 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f33e7410-3fa9-4e29-b769-12773c11a6bf", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f", Pod:"goldmane-58fd7646b9-n4sgd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80c3bc74afa", MAC:"92:f5:6d:4c:84:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:10:54.745688 containerd[1471]: 2025-08-13 07:10:54.740 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f" Namespace="calico-system" Pod="goldmane-58fd7646b9-n4sgd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:10:54.790421 systemd-networkd[1402]: cali145fd3394bf: Gained IPv6LL Aug 13 07:10:54.834461 containerd[1471]: time="2025-08-13T07:10:54.834410440Z" level=info msg="StartContainer for \"9722d2d5456d7418002722d90412f4cd73a9a403708e3ec4e54a6ba1e24da35d\" returns successfully" Aug 13 07:10:54.863812 containerd[1471]: time="2025-08-13T07:10:54.863648380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:10:54.864650 containerd[1471]: time="2025-08-13T07:10:54.863845730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:10:54.865637 containerd[1471]: time="2025-08-13T07:10:54.865541952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:54.866181 containerd[1471]: time="2025-08-13T07:10:54.866002035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:10:54.893217 systemd[1]: Started cri-containerd-25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f.scope - libcontainer container 25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f. 
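The MACs recorded on the endpoints above (ee:13:46:2e:07:19 for coredns, 92:f5:6d:4c:84:93 for goldmane) are both locally administered unicast addresses: the first octet has the 0x02 bit set and the 0x01 multicast bit clear. A sketch of generating one, mirroring a common CNI approach rather than Calico's exact code path:

```go
// Hedged sketch: produce a random locally-administered unicast MAC like the
// ones Calico wrote to the WorkloadEndpoints above. How Calico itself sources
// the MAC is not shown in this log; this is the generic technique.
package main

import (
	"crypto/rand"
	"fmt"
)

func randomLocalMAC() (string, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01 // locally administered, unicast
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
		buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}
```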
Aug 13 07:10:54.910350 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:10:54.939471 containerd[1471]: time="2025-08-13T07:10:54.939418509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n4sgd,Uid:f33e7410-3fa9-4e29-b769-12773c11a6bf,Namespace:calico-system,Attempt:1,} returns sandbox id \"25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f\"" Aug 13 07:10:55.050434 kubelet[2516]: E0813 07:10:55.050111 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:55.052658 kubelet[2516]: E0813 07:10:55.052161 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:55.204687 kubelet[2516]: I0813 07:10:55.204029 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g7nzk" podStartSLOduration=46.204001501 podStartE2EDuration="46.204001501s" podCreationTimestamp="2025-08-13 07:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:55.193025024 +0000 UTC m=+51.657931013" watchObservedRunningTime="2025-08-13 07:10:55.204001501 +0000 UTC m=+51.668907490" Aug 13 07:10:55.878166 systemd-networkd[1402]: cali00c6b201b0c: Gained IPv6LL Aug 13 07:10:56.053934 kubelet[2516]: E0813 07:10:56.053894 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:56.053934 kubelet[2516]: E0813 07:10:56.053937 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:56.262183 systemd-networkd[1402]: cali80c3bc74afa: Gained IPv6LL Aug 13 07:10:57.055674 kubelet[2516]: E0813 07:10:57.055631 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:10:58.462956 containerd[1471]: time="2025-08-13T07:10:58.462900772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:58.463755 containerd[1471]: time="2025-08-13T07:10:58.463682454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:10:58.465137 containerd[1471]: time="2025-08-13T07:10:58.465082992Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:58.467248 containerd[1471]: time="2025-08-13T07:10:58.467211540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:10:58.467837 containerd[1471]: time="2025-08-13T07:10:58.467806811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.721448685s" Aug 13 07:10:58.467869 containerd[1471]: time="2025-08-13T07:10:58.467837038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:10:58.469053 containerd[1471]: time="2025-08-13T07:10:58.469032338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:10:58.470443 containerd[1471]: time="2025-08-13T07:10:58.470389825Z" level=info msg="CreateContainer within sandbox \"e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:10:58.483702 containerd[1471]: time="2025-08-13T07:10:58.483596635Z" level=info msg="CreateContainer within sandbox \"e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bee6b066e0d22100b62381ac99572aade6fd6b54e7d0bf1ac0adf6b982769b23\"" Aug 13 07:10:58.490167 containerd[1471]: time="2025-08-13T07:10:58.490141890Z" level=info msg="StartContainer for \"bee6b066e0d22100b62381ac99572aade6fd6b54e7d0bf1ac0adf6b982769b23\"" Aug 13 07:10:58.524115 systemd[1]: Started cri-containerd-bee6b066e0d22100b62381ac99572aade6fd6b54e7d0bf1ac0adf6b982769b23.scope - libcontainer container bee6b066e0d22100b62381ac99572aade6fd6b54e7d0bf1ac0adf6b982769b23. Aug 13 07:10:58.565672 containerd[1471]: time="2025-08-13T07:10:58.565610794Z" level=info msg="StartContainer for \"bee6b066e0d22100b62381ac99572aade6fd6b54e7d0bf1ac0adf6b982769b23\" returns successfully" Aug 13 07:10:59.165525 systemd[1]: Started sshd@10-10.0.0.75:22-10.0.0.1:34488.service - OpenSSH per-connection server daemon (10.0.0.1:34488). Aug 13 07:10:59.217829 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 34488 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:10:59.219925 sshd[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:59.224542 systemd-logind[1459]: New session 11 of user core. Aug 13 07:10:59.242273 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:10:59.604879 sshd[5234]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:59.612547 systemd[1]: sshd@10-10.0.0.75:22-10.0.0.1:34488.service: Deactivated successfully. Aug 13 07:10:59.614735 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:10:59.617926 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:10:59.626416 systemd[1]: Started sshd@11-10.0.0.75:22-10.0.0.1:34490.service - OpenSSH per-connection server daemon (10.0.0.1:34490). Aug 13 07:10:59.627787 systemd-logind[1459]: Removed session 11. Aug 13 07:10:59.659553 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 34490 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:10:59.662660 sshd[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:59.671639 systemd-logind[1459]: New session 12 of user core. Aug 13 07:10:59.676243 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 13 07:10:59.858373 sshd[5255]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:59.866201 systemd[1]: sshd@11-10.0.0.75:22-10.0.0.1:34490.service: Deactivated successfully. Aug 13 07:10:59.871584 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:10:59.874039 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:10:59.884616 systemd[1]: Started sshd@12-10.0.0.75:22-10.0.0.1:34496.service - OpenSSH per-connection server daemon (10.0.0.1:34496). Aug 13 07:10:59.885696 systemd-logind[1459]: Removed session 12. Aug 13 07:10:59.921457 sshd[5270]: Accepted publickey for core from 10.0.0.1 port 34496 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:10:59.923652 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:59.927997 systemd-logind[1459]: New session 13 of user core. Aug 13 07:10:59.936005 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:10:59.967791 kubelet[2516]: I0813 07:10:59.967464 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd8575958-jsmst" podStartSLOduration=32.558419596 podStartE2EDuration="39.967445148s" podCreationTimestamp="2025-08-13 07:10:20 +0000 UTC" firstStartedPulling="2025-08-13 07:10:51.059801901 +0000 UTC m=+47.524707890" lastFinishedPulling="2025-08-13 07:10:58.468827453 +0000 UTC m=+54.933733442" observedRunningTime="2025-08-13 07:10:59.072204378 +0000 UTC m=+55.537110367" watchObservedRunningTime="2025-08-13 07:10:59.967445148 +0000 UTC m=+56.432351127" Aug 13 07:11:00.092356 sshd[5270]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:00.102467 systemd[1]: sshd@12-10.0.0.75:22-10.0.0.1:34496.service: Deactivated successfully. Aug 13 07:11:00.106820 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:11:00.109005 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:11:00.110630 systemd-logind[1459]: Removed session 13. 
Aug 13 07:11:00.180245 containerd[1471]: time="2025-08-13T07:11:00.179957301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:00.181390 containerd[1471]: time="2025-08-13T07:11:00.181176257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:11:00.182548 containerd[1471]: time="2025-08-13T07:11:00.182507583Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:00.184936 containerd[1471]: time="2025-08-13T07:11:00.184895579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:00.185573 containerd[1471]: time="2025-08-13T07:11:00.185512988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.716451805s" Aug 13 07:11:00.185573 containerd[1471]: time="2025-08-13T07:11:00.185549369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:11:00.186730 containerd[1471]: time="2025-08-13T07:11:00.186695795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:11:00.188673 containerd[1471]: time="2025-08-13T07:11:00.188553021Z" level=info msg="CreateContainer within sandbox \"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:11:00.212749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173641848.mount: Deactivated successfully. Aug 13 07:11:00.284683 containerd[1471]: time="2025-08-13T07:11:00.284605408Z" level=info msg="CreateContainer within sandbox \"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"28c3e312c1ed28b12d731c0ff8e150b6e3916db5da1ce2dfb33427ccde231739\"" Aug 13 07:11:00.285245 containerd[1471]: time="2025-08-13T07:11:00.285205523Z" level=info msg="StartContainer for \"28c3e312c1ed28b12d731c0ff8e150b6e3916db5da1ce2dfb33427ccde231739\"" Aug 13 07:11:00.323164 systemd[1]: Started cri-containerd-28c3e312c1ed28b12d731c0ff8e150b6e3916db5da1ce2dfb33427ccde231739.scope - libcontainer container 28c3e312c1ed28b12d731c0ff8e150b6e3916db5da1ce2dfb33427ccde231739. 
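The csi pull above reports bytes read=8759190 over an elapsed 1.716451805s, roughly 4.9 MiB/s of registry transfer. A back-of-envelope sketch using exactly the logged figures (the "size 10251893" in the Pulled message is a different accounting, so treat this as transfer rate only):

```go
// Hedged sketch: throughput from the "bytes read" and elapsed figures as
// printed in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 8759190
	elapsed := 1716451805 * time.Nanosecond // 1.716451805s from the log
	mibps := float64(bytesRead) / elapsed.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s\n", mibps) // ~4.9 MiB/s
}
```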
Aug 13 07:11:00.356848 containerd[1471]: time="2025-08-13T07:11:00.356795500Z" level=info msg="StartContainer for \"28c3e312c1ed28b12d731c0ff8e150b6e3916db5da1ce2dfb33427ccde231739\" returns successfully" Aug 13 07:11:03.729969 containerd[1471]: time="2025-08-13T07:11:03.729895283Z" level=info msg="StopPodSandbox for \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\"" Aug 13 07:11:03.809444 containerd[1471]: time="2025-08-13T07:11:03.809394399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:03.810463 containerd[1471]: time="2025-08-13T07:11:03.810412019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:11:03.811892 containerd[1471]: time="2025-08-13T07:11:03.811831768Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:03.814182 containerd[1471]: time="2025-08-13T07:11:03.814125990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:03.814852 containerd[1471]: time="2025-08-13T07:11:03.814805165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.628080254s" Aug 13 07:11:03.814852 containerd[1471]: time="2025-08-13T07:11:03.814847047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:11:03.815950 containerd[1471]: time="2025-08-13T07:11:03.815917079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:11:03.828738 containerd[1471]: time="2025-08-13T07:11:03.828624574Z" level=info msg="CreateContainer within sandbox \"dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:11:03.846848 containerd[1471]: time="2025-08-13T07:11:03.846799817Z" level=info msg="CreateContainer within sandbox \"dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"de8d626ed7f56f21ace468bf49a2a16aee762844917498fcb2f219843c4727de\"" Aug 13 07:11:03.847499 containerd[1471]: time="2025-08-13T07:11:03.847475937Z" level=info msg="StartContainer for \"de8d626ed7f56f21ace468bf49a2a16aee762844917498fcb2f219843c4727de\"" Aug 13 07:11:03.885152 systemd[1]: Started cri-containerd-de8d626ed7f56f21ace468bf49a2a16aee762844917498fcb2f219843c4727de.scope - libcontainer container de8d626ed7f56f21ace468bf49a2a16aee762844917498fcb2f219843c4727de. Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.844 [WARNING][5363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1735f9ec-98da-41a0-9a73-36a5bf3e9daf", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed", Pod:"coredns-7c65d6cfc9-gbvdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali145fd3394bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.845 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.845 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" iface="eth0" netns="" Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.845 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.845 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.871 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.871 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.872 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.878 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.878 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.879 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:03.887482 containerd[1471]: 2025-08-13 07:11:03.882 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.887997 containerd[1471]: time="2025-08-13T07:11:03.887554241Z" level=info msg="TearDown network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\" successfully" Aug 13 07:11:03.887997 containerd[1471]: time="2025-08-13T07:11:03.887602024Z" level=info msg="StopPodSandbox for \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\" returns successfully" Aug 13 07:11:03.888928 containerd[1471]: time="2025-08-13T07:11:03.888882924Z" level=info msg="RemovePodSandbox for \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\"" Aug 13 07:11:03.891316 containerd[1471]: time="2025-08-13T07:11:03.891266900Z" level=info msg="Forcibly stopping sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\"" Aug 13 07:11:03.932320 containerd[1471]: time="2025-08-13T07:11:03.932151235Z" level=info msg="StartContainer for \"de8d626ed7f56f21ace468bf49a2a16aee762844917498fcb2f219843c4727de\" returns successfully" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.931 [WARNING][5414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1735f9ec-98da-41a0-9a73-36a5bf3e9daf", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70a43eef9429cc5e8ad4817547f1a0ae9e743cb0081ae69386d5607b5231abed", Pod:"coredns-7c65d6cfc9-gbvdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali145fd3394bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.932 [INFO][5414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.932 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" iface="eth0" netns="" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.932 [INFO][5414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.932 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.958 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.958 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.958 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.966 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.966 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" HandleID="k8s-pod-network.5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Workload="localhost-k8s-coredns--7c65d6cfc9--gbvdn-eth0" Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.970 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:03.991164 containerd[1471]: 2025-08-13 07:11:03.974 [INFO][5414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f" Aug 13 07:11:03.991164 containerd[1471]: time="2025-08-13T07:11:03.991105940Z" level=info msg="TearDown network for sandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\" successfully" Aug 13 07:11:04.010332 containerd[1471]: time="2025-08-13T07:11:04.010267378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:04.010559 containerd[1471]: time="2025-08-13T07:11:04.010350569Z" level=info msg="RemovePodSandbox \"5edc6f7ef4102b0933b7057b37c0cab392f9dbf2b4601b9e50921e5a35a7ea1f\" returns successfully" Aug 13 07:11:04.012268 containerd[1471]: time="2025-08-13T07:11:04.012216809Z" level=info msg="StopPodSandbox for \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\"" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.104 [WARNING][5462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"89d9471b-7057-4239-b0e9-0c59341f2450", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134", Pod:"coredns-7c65d6cfc9-g7nzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00c6b201b0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.104 [INFO][5462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.104 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" iface="eth0" netns="" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.104 [INFO][5462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.104 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.133 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.133 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.133 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.138 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.138 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.140 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.146031 containerd[1471]: 2025-08-13 07:11:04.142 [INFO][5462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.146599 containerd[1471]: time="2025-08-13T07:11:04.146070677Z" level=info msg="TearDown network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\" successfully" Aug 13 07:11:04.146599 containerd[1471]: time="2025-08-13T07:11:04.146106897Z" level=info msg="StopPodSandbox for \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\" returns successfully" Aug 13 07:11:04.146761 containerd[1471]: time="2025-08-13T07:11:04.146718681Z" level=info msg="RemovePodSandbox for \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\"" Aug 13 07:11:04.146971 containerd[1471]: time="2025-08-13T07:11:04.146938165Z" level=info msg="Forcibly stopping sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\"" Aug 13 07:11:04.285806 kubelet[2516]: I0813 07:11:04.284920 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54495f4bd9-bw4q4" podStartSLOduration=29.696204089 podStartE2EDuration="41.28489411s" podCreationTimestamp="2025-08-13 07:10:23 +0000 UTC" firstStartedPulling="2025-08-13 07:10:52.227048001 +0000 UTC m=+48.691953990" lastFinishedPulling="2025-08-13 07:11:03.815738022 +0000 UTC m=+60.280644011" observedRunningTime="2025-08-13 07:11:04.104371084 +0000 UTC m=+60.569277073" watchObservedRunningTime="2025-08-13 07:11:04.28489411 +0000 UTC m=+60.749800099" Aug 13 07:11:04.299489 containerd[1471]: time="2025-08-13T07:11:04.299165890Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:04.300651 containerd[1471]: time="2025-08-13T07:11:04.300572171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:11:04.302930 containerd[1471]: time="2025-08-13T07:11:04.302884314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 486.93358ms" Aug 13 07:11:04.302930 containerd[1471]: time="2025-08-13T07:11:04.302928449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image 
reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:11:04.304842 containerd[1471]: time="2025-08-13T07:11:04.304810100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:11:04.305897 containerd[1471]: time="2025-08-13T07:11:04.305849610Z" level=info msg="CreateContainer within sandbox \"e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:11:04.320885 containerd[1471]: time="2025-08-13T07:11:04.320796046Z" level=info msg="CreateContainer within sandbox \"e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d48f6027d60d4103517a7617e78738af11028ee875f94f697d0b6ea81e994497\"" Aug 13 07:11:04.322118 containerd[1471]: time="2025-08-13T07:11:04.322064982Z" level=info msg="StartContainer for \"d48f6027d60d4103517a7617e78738af11028ee875f94f697d0b6ea81e994497\"" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.288 [WARNING][5511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"89d9471b-7057-4239-b0e9-0c59341f2450", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"470760ab4c964b0879d38f2f00e80d4e7f87d7fe77c9ecf9e66854a7f9a14134", Pod:"coredns-7c65d6cfc9-g7nzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00c6b201b0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.288 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.288 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" iface="eth0" netns="" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.288 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.288 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.312 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.313 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.313 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.320 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.320 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" HandleID="k8s-pod-network.a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Workload="localhost-k8s-coredns--7c65d6cfc9--g7nzk-eth0" Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.322 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.330308 containerd[1471]: 2025-08-13 07:11:04.326 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf" Aug 13 07:11:04.332508 containerd[1471]: time="2025-08-13T07:11:04.330537994Z" level=info msg="TearDown network for sandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\" successfully" Aug 13 07:11:04.337301 containerd[1471]: time="2025-08-13T07:11:04.337242935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:04.337301 containerd[1471]: time="2025-08-13T07:11:04.337332949Z" level=info msg="RemovePodSandbox \"a5ef0c9e6bae38183ce6fd5007592c755d27790011f377ab9e1201eda80ffbdf\" returns successfully" Aug 13 07:11:04.338046 containerd[1471]: time="2025-08-13T07:11:04.338018896Z" level=info msg="StopPodSandbox for \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\"" Aug 13 07:11:04.369172 systemd[1]: Started cri-containerd-d48f6027d60d4103517a7617e78738af11028ee875f94f697d0b6ea81e994497.scope - libcontainer container d48f6027d60d4103517a7617e78738af11028ee875f94f697d0b6ea81e994497. 
Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.379 [WARNING][5556] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" WorkloadEndpoint="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.379 [INFO][5556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.379 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" iface="eth0" netns="" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.379 [INFO][5556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.379 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.401 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.402 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.402 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.408 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.408 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.409 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.415309 containerd[1471]: 2025-08-13 07:11:04.412 [INFO][5556] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.415820 containerd[1471]: time="2025-08-13T07:11:04.415359031Z" level=info msg="TearDown network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\" successfully" Aug 13 07:11:04.415820 containerd[1471]: time="2025-08-13T07:11:04.415396213Z" level=info msg="StopPodSandbox for \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\" returns successfully" Aug 13 07:11:04.416815 containerd[1471]: time="2025-08-13T07:11:04.416683032Z" level=info msg="RemovePodSandbox for \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\"" Aug 13 07:11:04.416815 containerd[1471]: time="2025-08-13T07:11:04.416725224Z" level=info msg="Forcibly stopping sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\"" Aug 13 07:11:04.420583 containerd[1471]: time="2025-08-13T07:11:04.420284420Z" level=info msg="StartContainer for \"d48f6027d60d4103517a7617e78738af11028ee875f94f697d0b6ea81e994497\" returns successfully" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.456 [WARNING][5599] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" WorkloadEndpoint="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.457 [INFO][5599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.457 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" iface="eth0" netns="" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.457 [INFO][5599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.457 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.478 [INFO][5610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.478 [INFO][5610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.478 [INFO][5610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.483 [WARNING][5610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.483 [INFO][5610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" HandleID="k8s-pod-network.ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Workload="localhost-k8s-whisker--6dd6485567--mjgd8-eth0" Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.485 [INFO][5610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.491078 containerd[1471]: 2025-08-13 07:11:04.487 [INFO][5599] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad" Aug 13 07:11:04.491489 containerd[1471]: time="2025-08-13T07:11:04.491118808Z" level=info msg="TearDown network for sandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\" successfully" Aug 13 07:11:04.495219 containerd[1471]: time="2025-08-13T07:11:04.495178703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:04.495393 containerd[1471]: time="2025-08-13T07:11:04.495237918Z" level=info msg="RemovePodSandbox \"ea36b120172d66864d31a3842a0283658336a1c224a21e8ab158166bb85dccad\" returns successfully" Aug 13 07:11:04.496074 containerd[1471]: time="2025-08-13T07:11:04.495764716Z" level=info msg="StopPodSandbox for \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\"" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.532 [WARNING][5630] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0", GenerateName:"calico-kube-controllers-54495f4bd9-", Namespace:"calico-system", SelfLink:"", UID:"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54495f4bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03", Pod:"calico-kube-controllers-54495f4bd9-bw4q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid33587a11a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.533 [INFO][5630] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.533 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" iface="eth0" netns="" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.533 [INFO][5630] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.533 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.554 [INFO][5639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.555 [INFO][5639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.555 [INFO][5639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.560 [WARNING][5639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.560 [INFO][5639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.563 [INFO][5639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.569872 containerd[1471]: 2025-08-13 07:11:04.566 [INFO][5630] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.569872 containerd[1471]: time="2025-08-13T07:11:04.569836778Z" level=info msg="TearDown network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\" successfully" Aug 13 07:11:04.569872 containerd[1471]: time="2025-08-13T07:11:04.569859323Z" level=info msg="StopPodSandbox for \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\" returns successfully" Aug 13 07:11:04.570594 containerd[1471]: time="2025-08-13T07:11:04.570390850Z" level=info msg="RemovePodSandbox for \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\"" Aug 13 07:11:04.570594 containerd[1471]: time="2025-08-13T07:11:04.570421560Z" level=info msg="Forcibly stopping sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\"" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.609 [WARNING][5657] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0", GenerateName:"calico-kube-controllers-54495f4bd9-", Namespace:"calico-system", SelfLink:"", UID:"0e76a2a9-0e2c-4ebb-b5c0-9180a06395b3", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54495f4bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dddf7d312e529845204c77e172608f1662d830e5c1fe64a01afec69a55390a03", Pod:"calico-kube-controllers-54495f4bd9-bw4q4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid33587a11a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.609 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.609 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" iface="eth0" netns="" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.609 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.609 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.630 [INFO][5666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.630 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.630 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.636 [WARNING][5666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.636 [INFO][5666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" HandleID="k8s-pod-network.b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Workload="localhost-k8s-calico--kube--controllers--54495f4bd9--bw4q4-eth0" Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.638 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.643918 containerd[1471]: 2025-08-13 07:11:04.640 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc" Aug 13 07:11:04.644418 containerd[1471]: time="2025-08-13T07:11:04.644018794Z" level=info msg="TearDown network for sandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\" successfully" Aug 13 07:11:04.659208 containerd[1471]: time="2025-08-13T07:11:04.659170626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:04.659281 containerd[1471]: time="2025-08-13T07:11:04.659238848Z" level=info msg="RemovePodSandbox \"b260e6afa1cba008c198149c7ae3e3f76c300eb647d97ffe0f1405b608f6a0cc\" returns successfully" Aug 13 07:11:04.659907 containerd[1471]: time="2025-08-13T07:11:04.659857356Z" level=info msg="StopPodSandbox for \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\"" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.693 [WARNING][5683] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bdd3ae4-af36-4635-bf89-707fda641eeb", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086", Pod:"calico-apiserver-7cd8575958-vfc6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief7d5c0295e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.693 [INFO][5683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.693 [INFO][5683] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" iface="eth0" netns="" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.693 [INFO][5683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.693 [INFO][5683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.714 [INFO][5691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.715 [INFO][5691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.715 [INFO][5691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.720 [WARNING][5691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.720 [INFO][5691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.721 [INFO][5691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.728341 containerd[1471]: 2025-08-13 07:11:04.724 [INFO][5683] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.730609 containerd[1471]: time="2025-08-13T07:11:04.728374331Z" level=info msg="TearDown network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\" successfully" Aug 13 07:11:04.730609 containerd[1471]: time="2025-08-13T07:11:04.728407936Z" level=info msg="StopPodSandbox for \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\" returns successfully" Aug 13 07:11:04.730609 containerd[1471]: time="2025-08-13T07:11:04.729669406Z" level=info msg="RemovePodSandbox for \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\"" Aug 13 07:11:04.730609 containerd[1471]: time="2025-08-13T07:11:04.729708281Z" level=info msg="Forcibly stopping sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\"" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.767 [WARNING][5709] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bdd3ae4-af36-4635-bf89-707fda641eeb", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6b85edcc5d0dfe0091f0aaea8d9d1ea44ccab83126b8c37c8613c986159c086", Pod:"calico-apiserver-7cd8575958-vfc6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief7d5c0295e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.767 [INFO][5709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.769 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" iface="eth0" netns="" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.769 [INFO][5709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.769 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.789 [INFO][5719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.790 [INFO][5719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.791 [INFO][5719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.796 [WARNING][5719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.796 [INFO][5719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" HandleID="k8s-pod-network.ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Workload="localhost-k8s-calico--apiserver--7cd8575958--vfc6n-eth0" Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.798 [INFO][5719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.804834 containerd[1471]: 2025-08-13 07:11:04.801 [INFO][5709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb" Aug 13 07:11:04.805517 containerd[1471]: time="2025-08-13T07:11:04.804883888Z" level=info msg="TearDown network for sandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\" successfully" Aug 13 07:11:04.809266 containerd[1471]: time="2025-08-13T07:11:04.809227091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:04.809314 containerd[1471]: time="2025-08-13T07:11:04.809289912Z" level=info msg="RemovePodSandbox \"ab18be6ff4c0684e3a2dca9adc8699ebea79224c487160772bed71fcae4a9feb\" returns successfully" Aug 13 07:11:04.809939 containerd[1471]: time="2025-08-13T07:11:04.809884794Z" level=info msg="StopPodSandbox for \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\"" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.844 [WARNING][5739] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6twvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56153e13-236a-410f-9ff5-c48bb048b643", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d", Pod:"csi-node-driver-6twvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9e7608e1db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.845 [INFO][5739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.845 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" iface="eth0" netns="" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.845 [INFO][5739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.845 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.868 [INFO][5748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.868 [INFO][5748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.868 [INFO][5748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.874 [WARNING][5748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.874 [INFO][5748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.875 [INFO][5748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:04.882250 containerd[1471]: 2025-08-13 07:11:04.879 [INFO][5739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:04.882250 containerd[1471]: time="2025-08-13T07:11:04.882214455Z" level=info msg="TearDown network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\" successfully" Aug 13 07:11:04.882250 containerd[1471]: time="2025-08-13T07:11:04.882242660Z" level=info msg="StopPodSandbox for \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\" returns successfully" Aug 13 07:11:04.883182 containerd[1471]: time="2025-08-13T07:11:04.883158521Z" level=info msg="RemovePodSandbox for \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\"" Aug 13 07:11:04.883234 containerd[1471]: time="2025-08-13T07:11:04.883184832Z" level=info msg="Forcibly stopping sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\"" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.021 [WARNING][5766] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6twvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56153e13-236a-410f-9ff5-c48bb048b643", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d", Pod:"csi-node-driver-6twvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9e7608e1db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.021 [INFO][5766] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.021 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" iface="eth0" netns="" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.021 [INFO][5766] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.021 [INFO][5766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.043 [INFO][5775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.043 [INFO][5775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.043 [INFO][5775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.075 [WARNING][5775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.075 [INFO][5775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" HandleID="k8s-pod-network.9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Workload="localhost-k8s-csi--node--driver--6twvn-eth0" Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.077 [INFO][5775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:05.082564 containerd[1471]: 2025-08-13 07:11:05.079 [INFO][5766] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4" Aug 13 07:11:05.083015 containerd[1471]: time="2025-08-13T07:11:05.082605122Z" level=info msg="TearDown network for sandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\" successfully" Aug 13 07:11:05.112348 systemd[1]: Started sshd@13-10.0.0.75:22-10.0.0.1:34508.service - OpenSSH per-connection server daemon (10.0.0.1:34508). Aug 13 07:11:05.161153 containerd[1471]: time="2025-08-13T07:11:05.161012644Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:05.161153 containerd[1471]: time="2025-08-13T07:11:05.161121935Z" level=info msg="RemovePodSandbox \"9db7ec66788c9ce312ade1cee8150c14d57284a81819a4d3453e3b2fe2bc99b4\" returns successfully" Aug 13 07:11:05.161624 containerd[1471]: time="2025-08-13T07:11:05.161578558Z" level=info msg="StopPodSandbox for \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\"" Aug 13 07:11:05.190153 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 34508 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:05.192461 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:05.197403 systemd-logind[1459]: New session 14 of user core. Aug 13 07:11:05.205156 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.233 [WARNING][5796] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e4cc1e4-02e4-465a-8bb4-0feae995f17b", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e", Pod:"calico-apiserver-7cd8575958-jsmst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali497bfe1a3db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.234 [INFO][5796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.234 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" iface="eth0" netns="" Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.234 [INFO][5796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.234 [INFO][5796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.256 [INFO][5807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.256 [INFO][5807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.256 [INFO][5807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.268 [WARNING][5807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.268 [INFO][5807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.269 [INFO][5807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:05.275094 containerd[1471]: 2025-08-13 07:11:05.272 [INFO][5796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.275597 containerd[1471]: time="2025-08-13T07:11:05.275145793Z" level=info msg="TearDown network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\" successfully" Aug 13 07:11:05.275597 containerd[1471]: time="2025-08-13T07:11:05.275173216Z" level=info msg="StopPodSandbox for \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\" returns successfully" Aug 13 07:11:05.282555 containerd[1471]: time="2025-08-13T07:11:05.275761403Z" level=info msg="RemovePodSandbox for \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\"" Aug 13 07:11:05.282614 containerd[1471]: time="2025-08-13T07:11:05.282562569Z" level=info msg="Forcibly stopping sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\"" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.329 [WARNING][5829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0", GenerateName:"calico-apiserver-7cd8575958-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e4cc1e4-02e4-465a-8bb4-0feae995f17b", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd8575958", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e18df1326526af4e74f7f7b60bfe433cad8064100f72431e1d204d8c11a1e14e", Pod:"calico-apiserver-7cd8575958-jsmst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali497bfe1a3db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.330 [INFO][5829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.330 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" iface="eth0" netns="" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.330 [INFO][5829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.330 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.354 [INFO][5842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.354 [INFO][5842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.354 [INFO][5842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.460 [WARNING][5842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.460 [INFO][5842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" HandleID="k8s-pod-network.2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Workload="localhost-k8s-calico--apiserver--7cd8575958--jsmst-eth0" Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.462 [INFO][5842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:05.468303 containerd[1471]: 2025-08-13 07:11:05.465 [INFO][5829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e" Aug 13 07:11:05.468303 containerd[1471]: time="2025-08-13T07:11:05.468284220Z" level=info msg="TearDown network for sandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\" successfully" Aug 13 07:11:05.520526 containerd[1471]: time="2025-08-13T07:11:05.520460085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:05.520702 containerd[1471]: time="2025-08-13T07:11:05.520549639Z" level=info msg="RemovePodSandbox \"2813b3e1df05d1cd5cae9bd646bfd0e7c734b6f626dcca60c1352b8f6661019e\" returns successfully" Aug 13 07:11:05.522891 containerd[1471]: time="2025-08-13T07:11:05.521431072Z" level=info msg="StopPodSandbox for \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\"" Aug 13 07:11:05.582037 sshd[5784]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:05.589972 systemd[1]: sshd@13-10.0.0.75:22-10.0.0.1:34508.service: Deactivated successfully. Aug 13 07:11:05.593526 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:11:05.595312 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:11:05.596800 systemd-logind[1459]: Removed session 14. Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.563 [WARNING][5861] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f33e7410-3fa9-4e29-b769-12773c11a6bf", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f", Pod:"goldmane-58fd7646b9-n4sgd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80c3bc74afa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.563 [INFO][5861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.563 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" iface="eth0" netns="" Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.563 [INFO][5861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.563 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.590 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.591 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.591 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.597 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.597 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.599 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:05.605685 containerd[1471]: 2025-08-13 07:11:05.602 [INFO][5861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.606254 containerd[1471]: time="2025-08-13T07:11:05.605773045Z" level=info msg="TearDown network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\" successfully" Aug 13 07:11:05.606254 containerd[1471]: time="2025-08-13T07:11:05.605814035Z" level=info msg="StopPodSandbox for \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\" returns successfully" Aug 13 07:11:05.606493 containerd[1471]: time="2025-08-13T07:11:05.606461646Z" level=info msg="RemovePodSandbox for \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\"" Aug 13 07:11:05.606530 containerd[1471]: time="2025-08-13T07:11:05.606501193Z" level=info msg="Forcibly stopping sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\"" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.652 [WARNING][5890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f33e7410-3fa9-4e29-b769-12773c11a6bf", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 10, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f", Pod:"goldmane-58fd7646b9-n4sgd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80c3bc74afa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.652 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.652 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" iface="eth0" netns="" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.652 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.652 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.676 [INFO][5899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.677 [INFO][5899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.677 [INFO][5899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.684 [WARNING][5899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.684 [INFO][5899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" HandleID="k8s-pod-network.4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Workload="localhost-k8s-goldmane--58fd7646b9--n4sgd-eth0" Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.686 [INFO][5899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:11:05.693066 containerd[1471]: 2025-08-13 07:11:05.689 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db" Aug 13 07:11:05.693517 containerd[1471]: time="2025-08-13T07:11:05.693107783Z" level=info msg="TearDown network for sandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\" successfully" Aug 13 07:11:05.697761 containerd[1471]: time="2025-08-13T07:11:05.697712994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:11:05.697834 containerd[1471]: time="2025-08-13T07:11:05.697802007Z" level=info msg="RemovePodSandbox \"4e66817bbba7f666d9522b8d5b8196e76577c4df178fb94ae7c87092d536c9db\" returns successfully" Aug 13 07:11:05.948450 kubelet[2516]: I0813 07:11:05.948374 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd8575958-vfc6n" podStartSLOduration=35.061910943 podStartE2EDuration="45.948350146s" podCreationTimestamp="2025-08-13 07:10:20 +0000 UTC" firstStartedPulling="2025-08-13 07:10:53.417475895 +0000 UTC m=+49.882381884" lastFinishedPulling="2025-08-13 07:11:04.303915098 +0000 UTC m=+60.768821087" observedRunningTime="2025-08-13 07:11:05.178384925 +0000 UTC m=+61.643290914" watchObservedRunningTime="2025-08-13 07:11:05.948350146 +0000 UTC m=+62.413256135" Aug 13 07:11:07.697440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422868146.mount: Deactivated successfully. 
Aug 13 07:11:08.294332 containerd[1471]: time="2025-08-13T07:11:08.294268168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:08.309528 containerd[1471]: time="2025-08-13T07:11:08.309447910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:11:08.315930 containerd[1471]: time="2025-08-13T07:11:08.315869253Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:08.319054 containerd[1471]: time="2025-08-13T07:11:08.319014177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:08.320071 containerd[1471]: time="2025-08-13T07:11:08.319967847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.015123622s" Aug 13 07:11:08.320132 containerd[1471]: time="2025-08-13T07:11:08.320073260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:11:08.321561 containerd[1471]: time="2025-08-13T07:11:08.321533806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:11:08.322675 containerd[1471]: time="2025-08-13T07:11:08.322616524Z" level=info msg="CreateContainer within sandbox \"25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:11:08.335485 containerd[1471]: time="2025-08-13T07:11:08.335430384Z" level=info msg="CreateContainer within sandbox \"25f2c25e208991831e2eb6f66e78140c9c5bc33d97348fed9e138e4363ae9a2f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b56f1f217c18d459de5ae8083bf8cae6833c1d88e6d3e634ac102c2c16b833d3\"" Aug 13 07:11:08.336859 containerd[1471]: time="2025-08-13T07:11:08.336122640Z" level=info msg="StartContainer for \"b56f1f217c18d459de5ae8083bf8cae6833c1d88e6d3e634ac102c2c16b833d3\"" Aug 13 07:11:08.409118 systemd[1]: Started cri-containerd-b56f1f217c18d459de5ae8083bf8cae6833c1d88e6d3e634ac102c2c16b833d3.scope - libcontainer container b56f1f217c18d459de5ae8083bf8cae6833c1d88e6d3e634ac102c2c16b833d3. 
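The goldmane lines above trace the CRI-driven sequence PullImage, then CreateContainer inside the already-running sandbox 25f2c25e…, then StartContainer (its success is confirmed just below), with systemd wrapping the container in a transient cri-containerd-<id>.scope unit. For illustration only, roughly the same pull/create/start sequence through containerd's Go client rather than the CRI path kubelet actually uses; the container ID "goldmane-demo" and the snapshot name are made up:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Counterpart of the logged PullImage.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.2",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // Counterpart of CreateContainer ("goldmane-demo" is a made-up ID).
        container, err := client.NewContainer(ctx, "goldmane-demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("goldmane-demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }

        // Counterpart of StartContainer: create the task, then start it.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }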
Aug 13 07:11:08.542345 containerd[1471]: time="2025-08-13T07:11:08.542284691Z" level=info msg="StartContainer for \"b56f1f217c18d459de5ae8083bf8cae6833c1d88e6d3e634ac102c2c16b833d3\" returns successfully" Aug 13 07:11:09.130769 kubelet[2516]: I0813 07:11:09.130684 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-n4sgd" podStartSLOduration=32.750664593 podStartE2EDuration="46.130664222s" podCreationTimestamp="2025-08-13 07:10:23 +0000 UTC" firstStartedPulling="2025-08-13 07:10:54.940963206 +0000 UTC m=+51.405869195" lastFinishedPulling="2025-08-13 07:11:08.320962805 +0000 UTC m=+64.785868824" observedRunningTime="2025-08-13 07:11:09.129047987 +0000 UTC m=+65.593953976" watchObservedRunningTime="2025-08-13 07:11:09.130664222 +0000 UTC m=+65.595570211" Aug 13 07:11:10.601570 systemd[1]: Started sshd@14-10.0.0.75:22-10.0.0.1:35654.service - OpenSSH per-connection server daemon (10.0.0.1:35654). Aug 13 07:11:10.789332 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 35654 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:10.790250 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:10.797203 systemd-logind[1459]: New session 15 of user core. Aug 13 07:11:10.802211 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:11:10.873473 containerd[1471]: time="2025-08-13T07:11:10.873344305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:10.874944 containerd[1471]: time="2025-08-13T07:11:10.874688564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:11:10.876414 containerd[1471]: time="2025-08-13T07:11:10.876380843Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:10.879393 containerd[1471]: time="2025-08-13T07:11:10.879328097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:10.880588 containerd[1471]: time="2025-08-13T07:11:10.880528459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.558960136s" Aug 13 07:11:10.880588 containerd[1471]: time="2025-08-13T07:11:10.880569048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:11:10.886909 containerd[1471]: time="2025-08-13T07:11:10.885705678Z" level=info msg="CreateContainer within sandbox \"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:11:10.901449 containerd[1471]: time="2025-08-13T07:11:10.901405219Z" level=info msg="CreateContainer within sandbox 
\"889dbc84d26ec6c8f6ef29a35e2bd561db0d9643d533535e926400b0c0d3004d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"00364d04c94c15892081685696108578531a1909b1fdc3f8898bb1ec84bb1745\"" Aug 13 07:11:10.902447 containerd[1471]: time="2025-08-13T07:11:10.902390857Z" level=info msg="StartContainer for \"00364d04c94c15892081685696108578531a1909b1fdc3f8898bb1ec84bb1745\"" Aug 13 07:11:10.937121 systemd[1]: Started cri-containerd-00364d04c94c15892081685696108578531a1909b1fdc3f8898bb1ec84bb1745.scope - libcontainer container 00364d04c94c15892081685696108578531a1909b1fdc3f8898bb1ec84bb1745. Aug 13 07:11:11.069890 containerd[1471]: time="2025-08-13T07:11:11.069510993Z" level=info msg="StartContainer for \"00364d04c94c15892081685696108578531a1909b1fdc3f8898bb1ec84bb1745\" returns successfully" Aug 13 07:11:11.132770 kubelet[2516]: I0813 07:11:11.132588 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6twvn" podStartSLOduration=29.339034125 podStartE2EDuration="48.132570814s" podCreationTimestamp="2025-08-13 07:10:23 +0000 UTC" firstStartedPulling="2025-08-13 07:10:52.090314909 +0000 UTC m=+48.555220898" lastFinishedPulling="2025-08-13 07:11:10.883851598 +0000 UTC m=+67.348757587" observedRunningTime="2025-08-13 07:11:11.132142129 +0000 UTC m=+67.597048118" watchObservedRunningTime="2025-08-13 07:11:11.132570814 +0000 UTC m=+67.597476803" Aug 13 07:11:11.183054 sshd[6041]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:11.187554 systemd[1]: sshd@14-10.0.0.75:22-10.0.0.1:35654.service: Deactivated successfully. Aug 13 07:11:11.189891 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:11:11.192577 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:11:11.193514 systemd-logind[1459]: Removed session 15. Aug 13 07:11:11.931314 kubelet[2516]: I0813 07:11:11.931263 2516 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:11:11.931314 kubelet[2516]: I0813 07:11:11.931311 2516 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:11:16.202339 systemd[1]: Started sshd@15-10.0.0.75:22-10.0.0.1:35668.service - OpenSSH per-connection server daemon (10.0.0.1:35668). Aug 13 07:11:16.267800 sshd[6120]: Accepted publickey for core from 10.0.0.1 port 35668 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:16.269360 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:16.274326 systemd-logind[1459]: New session 16 of user core. Aug 13 07:11:16.282208 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:11:16.439001 sshd[6120]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:16.443947 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:11:16.444445 systemd[1]: sshd@15-10.0.0.75:22-10.0.0.1:35668.service: Deactivated successfully. Aug 13 07:11:16.446795 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:11:16.447761 systemd-logind[1459]: Removed session 16. Aug 13 07:11:21.451577 systemd[1]: Started sshd@16-10.0.0.75:22-10.0.0.1:51350.service - OpenSSH per-connection server daemon (10.0.0.1:51350). 
Aug 13 07:11:21.491836 sshd[6134]: Accepted publickey for core from 10.0.0.1 port 51350 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:21.493599 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:21.498344 systemd-logind[1459]: New session 17 of user core. Aug 13 07:11:21.506107 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:11:21.620002 sshd[6134]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:21.629103 systemd[1]: sshd@16-10.0.0.75:22-10.0.0.1:51350.service: Deactivated successfully. Aug 13 07:11:21.631276 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:11:21.632957 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:11:21.641349 systemd[1]: Started sshd@17-10.0.0.75:22-10.0.0.1:51354.service - OpenSSH per-connection server daemon (10.0.0.1:51354). Aug 13 07:11:21.642682 systemd-logind[1459]: Removed session 17. Aug 13 07:11:21.689006 sshd[6149]: Accepted publickey for core from 10.0.0.1 port 51354 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:21.691428 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:21.702538 systemd-logind[1459]: New session 18 of user core. Aug 13 07:11:21.710155 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:11:22.010567 sshd[6149]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:22.022317 systemd[1]: sshd@17-10.0.0.75:22-10.0.0.1:51354.service: Deactivated successfully. Aug 13 07:11:22.024590 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:11:22.027098 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:11:22.034632 systemd[1]: Started sshd@18-10.0.0.75:22-10.0.0.1:51362.service - OpenSSH per-connection server daemon (10.0.0.1:51362). Aug 13 07:11:22.035936 systemd-logind[1459]: Removed session 18. Aug 13 07:11:22.134597 sshd[6162]: Accepted publickey for core from 10.0.0.1 port 51362 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:22.136621 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:22.141738 systemd-logind[1459]: New session 19 of user core. Aug 13 07:11:22.147221 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:11:23.741316 sshd[6162]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:23.752901 systemd[1]: sshd@18-10.0.0.75:22-10.0.0.1:51362.service: Deactivated successfully. Aug 13 07:11:23.759379 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:11:23.761550 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:11:23.771127 systemd[1]: Started sshd@19-10.0.0.75:22-10.0.0.1:51376.service - OpenSSH per-connection server daemon (10.0.0.1:51376). Aug 13 07:11:23.774531 systemd-logind[1459]: Removed session 19. Aug 13 07:11:23.851176 sshd[6183]: Accepted publickey for core from 10.0.0.1 port 51376 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:23.856013 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:23.863949 systemd-logind[1459]: New session 20 of user core. Aug 13 07:11:23.875181 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 07:11:24.357390 sshd[6183]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:24.366320 systemd[1]: sshd@19-10.0.0.75:22-10.0.0.1:51376.service: Deactivated successfully. Aug 13 07:11:24.368776 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:11:24.371375 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:11:24.376354 systemd[1]: Started sshd@20-10.0.0.75:22-10.0.0.1:51378.service - OpenSSH per-connection server daemon (10.0.0.1:51378). Aug 13 07:11:24.377532 systemd-logind[1459]: Removed session 20. Aug 13 07:11:24.420792 sshd[6196]: Accepted publickey for core from 10.0.0.1 port 51378 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:24.422534 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:24.427372 systemd-logind[1459]: New session 21 of user core. Aug 13 07:11:24.437111 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:11:24.582579 sshd[6196]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:24.587168 systemd[1]: sshd@20-10.0.0.75:22-10.0.0.1:51378.service: Deactivated successfully. Aug 13 07:11:24.589191 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:11:24.589860 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:11:24.590940 systemd-logind[1459]: Removed session 21. Aug 13 07:11:29.600118 systemd[1]: Started sshd@21-10.0.0.75:22-10.0.0.1:44258.service - OpenSSH per-connection server daemon (10.0.0.1:44258). Aug 13 07:11:29.639461 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 44258 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:29.641685 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:29.648086 systemd-logind[1459]: New session 22 of user core. Aug 13 07:11:29.651664 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:11:29.777392 sshd[6219]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:29.781676 systemd[1]: sshd@21-10.0.0.75:22-10.0.0.1:44258.service: Deactivated successfully. Aug 13 07:11:29.784352 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:11:29.786210 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:11:29.788088 systemd-logind[1459]: Removed session 22. Aug 13 07:11:33.748410 kubelet[2516]: E0813 07:11:33.748352 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:11:34.739153 kubelet[2516]: E0813 07:11:34.739098 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:11:34.739701 kubelet[2516]: E0813 07:11:34.739634 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:11:34.792625 systemd[1]: Started sshd@22-10.0.0.75:22-10.0.0.1:44272.service - OpenSSH per-connection server daemon (10.0.0.1:44272). 
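Unit names like sshd@22-10.0.0.75:22-10.0.0.1:44272.service are systemd's per-connection socket activation at work: an sshd.socket declared with Accept=yes spawns one templated sshd@.service instance per incoming TCP connection, which is why every session above is bracketed by its own sshd@… service starting and later deactivating. Illustrative units in that style, not necessarily the exact files this image ships:

    # sshd.socket (illustrative)
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service (illustrative)
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket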
Aug 13 07:11:34.839810 sshd[6255]: Accepted publickey for core from 10.0.0.1 port 44272 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:34.841606 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:34.846142 systemd-logind[1459]: New session 23 of user core. Aug 13 07:11:34.853104 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:11:34.971809 sshd[6255]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:34.976889 systemd[1]: sshd@22-10.0.0.75:22-10.0.0.1:44272.service: Deactivated successfully. Aug 13 07:11:34.979330 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:11:34.980056 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:11:34.981438 systemd-logind[1459]: Removed session 23. Aug 13 07:11:37.739523 kubelet[2516]: E0813 07:11:37.739469 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:11:39.990100 systemd[1]: Started sshd@23-10.0.0.75:22-10.0.0.1:34120.service - OpenSSH per-connection server daemon (10.0.0.1:34120). Aug 13 07:11:40.036182 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 34120 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:11:40.038441 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:40.043519 systemd-logind[1459]: New session 24 of user core. Aug 13 07:11:40.052128 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:11:40.172802 sshd[6313]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:40.177822 systemd[1]: sshd@23-10.0.0.75:22-10.0.0.1:34120.service: Deactivated successfully. Aug 13 07:11:40.180201 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:11:40.180842 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:11:40.181767 systemd-logind[1459]: Removed session 24.
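The recurring kubelet dns.go "Nameserver limits exceeded" errors come from the glibc resolver's hard cap of three nameserver entries (MAXNS): the node's resolv.conf evidently lists more than three servers, so kubelet omits the extras and applies only "1.1.1.1 1.0.0.1 8.8.8.8". An illustrative resolv.conf that would produce exactly this message; the fourth entry is a made-up example:

    # /etc/resolv.conf (illustrative)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # omitted: only the first three are applied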