May 17 00:33:32.958130 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:33:32.958162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:33:32.958173 kernel: BIOS-provided physical RAM map:
May 17 00:33:32.958180 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:33:32.958186 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 17 00:33:32.958192 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 17 00:33:32.958199 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 17 00:33:32.958205 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 17 00:33:32.958212 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 17 00:33:32.958218 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 17 00:33:32.958227 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 17 00:33:32.958233 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 17 00:33:32.958239 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 17 00:33:32.958246 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 17 00:33:32.958266 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 17 00:33:32.958273 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 17 00:33:32.958282 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 17 00:33:32.958289 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 17 00:33:32.958296 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 17 00:33:32.958302 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:33:32.958309 kernel: NX (Execute Disable) protection: active
May 17 00:33:32.958316 kernel: APIC: Static calls initialized
May 17 00:33:32.958322 kernel: efi: EFI v2.7 by EDK II
May 17 00:33:32.958329 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 17 00:33:32.958336 kernel: SMBIOS 2.8 present.
May 17 00:33:32.958342 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 17 00:33:32.958349 kernel: Hypervisor detected: KVM
May 17 00:33:32.958358 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:33:32.958365 kernel: kvm-clock: using sched offset of 3976657773 cycles
May 17 00:33:32.958372 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:33:32.958379 kernel: tsc: Detected 2794.748 MHz processor
May 17 00:33:32.958386 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:33:32.958393 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:33:32.958400 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 17 00:33:32.958407 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 17 00:33:32.958414 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:33:32.958423 kernel: Using GB pages for direct mapping
May 17 00:33:32.958430 kernel: Secure boot disabled
May 17 00:33:32.958437 kernel: ACPI: Early table checksum verification disabled
May 17 00:33:32.958444 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 17 00:33:32.958454 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 17 00:33:32.958462 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:32.958469 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:32.958478 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 17 00:33:32.958485 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:32.958492 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:32.958500 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:32.958507 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:32.958514 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 17 00:33:32.958521 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 17 00:33:32.958530 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 17 00:33:32.958537 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 17 00:33:32.958544 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 17 00:33:32.958551 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 17 00:33:32.958558 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 17 00:33:32.958565 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 17 00:33:32.958572 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 17 00:33:32.958579 kernel: No NUMA configuration found
May 17 00:33:32.958586 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 17 00:33:32.958596 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 17 00:33:32.958603 kernel: Zone ranges:
May 17 00:33:32.958610 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:33:32.958617 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 17 00:33:32.958624 kernel: Normal empty
May 17 00:33:32.958632 kernel: Movable zone start for each node
May 17 00:33:32.958639 kernel: Early memory node ranges
May 17 00:33:32.958646 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:33:32.958653 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 17 00:33:32.958660 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 17 00:33:32.958669 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 17 00:33:32.958676 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 17 00:33:32.958683 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 17 00:33:32.958691 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 17 00:33:32.958698 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:33:32.958705 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:33:32.958712 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 17 00:33:32.958719 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:33:32.958726 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 17 00:33:32.958735 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 17 00:33:32.958742 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 17 00:33:32.958749 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:33:32.958756 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:33:32.958763 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:33:32.958771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:33:32.958778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:33:32.958785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:33:32.958792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:33:32.958799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:33:32.958809 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:33:32.958816 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:33:32.958823 kernel: TSC deadline timer available
May 17 00:33:32.958830 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 17 00:33:32.958838 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:33:32.958845 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:33:32.958852 kernel: kvm-guest: setup PV sched yield
May 17 00:33:32.958859 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 17 00:33:32.958866 kernel: Booting paravirtualized kernel on KVM
May 17 00:33:32.958876 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:33:32.958883 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 17 00:33:32.958891 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 17 00:33:32.958898 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 17 00:33:32.958905 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 00:33:32.958912 kernel: kvm-guest: PV spinlocks enabled
May 17 00:33:32.958919 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:33:32.958927 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:33:32.958938 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:33:32.958945 kernel: random: crng init done
May 17 00:33:32.958952 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:33:32.958959 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:33:32.958967 kernel: Fallback order for Node 0: 0
May 17 00:33:32.958974 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 17 00:33:32.958981 kernel: Policy zone: DMA32
May 17 00:33:32.958988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:33:32.958995 kernel: Memory: 2400596K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 166144K reserved, 0K cma-reserved)
May 17 00:33:32.959005 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 00:33:32.959012 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:33:32.959019 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:33:32.959026 kernel: Dynamic Preempt: voluntary
May 17 00:33:32.959042 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:33:32.959052 kernel: rcu: RCU event tracing is enabled.
May 17 00:33:32.959059 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 00:33:32.959067 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:33:32.959074 kernel: Rude variant of Tasks RCU enabled.
May 17 00:33:32.959082 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:33:32.959089 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:33:32.959097 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 00:33:32.959106 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 00:33:32.959114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:33:32.959122 kernel: Console: colour dummy device 80x25
May 17 00:33:32.959129 kernel: printk: console [ttyS0] enabled
May 17 00:33:32.959136 kernel: ACPI: Core revision 20230628
May 17 00:33:32.959158 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:33:32.959165 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:33:32.959173 kernel: x2apic enabled
May 17 00:33:32.959180 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:33:32.959188 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 17 00:33:32.959196 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 17 00:33:32.959203 kernel: kvm-guest: setup PV IPIs
May 17 00:33:32.959210 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:33:32.959218 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:33:32.959228 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 17 00:33:32.959235 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:33:32.959243 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:33:32.959250 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:33:32.959269 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:33:32.959276 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:33:32.959284 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:33:32.959291 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:33:32.959299 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:33:32.959309 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:33:32.959317 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:33:32.959325 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 17 00:33:32.959333 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 17 00:33:32.959340 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 17 00:33:32.959348 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:33:32.959355 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:33:32.959363 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:33:32.959373 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:33:32.959380 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 17 00:33:32.959388 kernel: Freeing SMP alternatives memory: 32K
May 17 00:33:32.959396 kernel: pid_max: default: 32768 minimum: 301
May 17 00:33:32.959403 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:33:32.959411 kernel: landlock: Up and running.
May 17 00:33:32.959418 kernel: SELinux: Initializing.
May 17 00:33:32.959425 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:33:32.959433 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:33:32.959443 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:33:32.959450 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:33:32.959458 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:33:32.959466 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:33:32.959473 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:33:32.959480 kernel: ... version: 0
May 17 00:33:32.959488 kernel: ... bit width: 48
May 17 00:33:32.959495 kernel: ... generic registers: 6
May 17 00:33:32.959502 kernel: ... value mask: 0000ffffffffffff
May 17 00:33:32.959512 kernel: ... max period: 00007fffffffffff
May 17 00:33:32.959520 kernel: ... fixed-purpose events: 0
May 17 00:33:32.959527 kernel: ... event mask: 000000000000003f
May 17 00:33:32.959534 kernel: signal: max sigframe size: 1776
May 17 00:33:32.959542 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:33:32.959549 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:33:32.959557 kernel: smp: Bringing up secondary CPUs ...
May 17 00:33:32.959564 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:33:32.959571 kernel: .... node #0, CPUs: #1 #2 #3
May 17 00:33:32.959581 kernel: smp: Brought up 1 node, 4 CPUs
May 17 00:33:32.959588 kernel: smpboot: Max logical packages: 1
May 17 00:33:32.959596 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 17 00:33:32.959603 kernel: devtmpfs: initialized
May 17 00:33:32.959610 kernel: x86/mm: Memory block size: 128MB
May 17 00:33:32.959618 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 17 00:33:32.959625 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 17 00:33:32.959633 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 17 00:33:32.959640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 17 00:33:32.959650 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 17 00:33:32.959658 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:33:32.959665 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 00:33:32.959673 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:33:32.959680 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:33:32.959687 kernel: audit: initializing netlink subsys (disabled)
May 17 00:33:32.959695 kernel: audit: type=2000 audit(1747442012.389:1): state=initialized audit_enabled=0 res=1
May 17 00:33:32.959702 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:33:32.959710 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:33:32.959719 kernel: cpuidle: using governor menu
May 17 00:33:32.959727 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:33:32.959734 kernel: dca service started, version 1.12.1
May 17 00:33:32.959742 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:33:32.959749 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 17 00:33:32.959757 kernel: PCI: Using configuration type 1 for base access
May 17 00:33:32.959764 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:33:32.959772 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:33:32.959779 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:33:32.959789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:33:32.959796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:33:32.959804 kernel: ACPI: Added _OSI(Module Device)
May 17 00:33:32.959811 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:33:32.959818 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:33:32.959826 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:33:32.959833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:33:32.959841 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:33:32.959848 kernel: ACPI: Interpreter enabled
May 17 00:33:32.959858 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:33:32.959865 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:33:32.959873 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:33:32.959880 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:33:32.959887 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:33:32.959895 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:33:32.960094 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:33:32.960235 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:33:32.960387 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:33:32.960397 kernel: PCI host bridge to bus 0000:00
May 17 00:33:32.960520 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:33:32.960631 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:33:32.960741 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:33:32.960875 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 17 00:33:32.961024 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:33:32.961154 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 17 00:33:32.961280 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:33:32.961417 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:33:32.961546 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:33:32.961668 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 17 00:33:32.961787 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 17 00:33:32.961934 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 17 00:33:32.962076 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 17 00:33:32.962206 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:33:32.962349 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:33:32.962471 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 17 00:33:32.962591 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 17 00:33:32.962713 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 17 00:33:32.962845 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 17 00:33:32.962967 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 17 00:33:32.963087 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 17 00:33:32.963217 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 17 00:33:32.963359 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:33:32.963483 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 17 00:33:32.963606 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 17 00:33:32.963727 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 17 00:33:32.963852 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 17 00:33:32.964002 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:33:32.964163 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:33:32.964342 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:33:32.964475 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 17 00:33:32.964599 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 17 00:33:32.964749 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:33:32.964905 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 17 00:33:32.964920 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:33:32.964931 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:33:32.964941 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:33:32.964950 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:33:32.964960 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:33:32.964975 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:33:32.964984 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:33:32.964994 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:33:32.965004 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:33:32.965013 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:33:32.965023 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:33:32.965033 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:33:32.965042 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:33:32.965052 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:33:32.965065 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:33:32.965075 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:33:32.965085 kernel: iommu: Default domain type: Translated
May 17 00:33:32.965094 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:33:32.965104 kernel: efivars: Registered efivars operations
May 17 00:33:32.965114 kernel: PCI: Using ACPI for IRQ routing
May 17 00:33:32.965124 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:33:32.965133 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 17 00:33:32.965154 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 17 00:33:32.965167 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 17 00:33:32.965178 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 17 00:33:32.965356 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:33:32.965521 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:33:32.965691 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:33:32.965708 kernel: vgaarb: loaded
May 17 00:33:32.965720 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:33:32.965729 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:33:32.965740 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:33:32.965755 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:33:32.965765 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:33:32.965776 kernel: pnp: PnP ACPI init
May 17 00:33:32.965949 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:33:32.965967 kernel: pnp: PnP ACPI: found 6 devices
May 17 00:33:32.965977 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:33:32.965994 kernel: NET: Registered PF_INET protocol family
May 17 00:33:32.966005 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:33:32.966021 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:33:32.966032 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:33:32.966042 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:33:32.966052 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:33:32.966061 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:33:32.966071 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:33:32.966081 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:33:32.966091 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:33:32.966101 kernel: NET: Registered PF_XDP protocol family
May 17 00:33:32.966301 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 17 00:33:32.966469 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 17 00:33:32.966625 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:33:32.966802 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:33:32.966970 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:33:32.967118 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 17 00:33:32.967295 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:33:32.967445 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 17 00:33:32.967466 kernel: PCI: CLS 0 bytes, default 64
May 17 00:33:32.967477 kernel: Initialise system trusted keyrings
May 17 00:33:32.967487 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:33:32.967498 kernel: Key type asymmetric registered
May 17 00:33:32.967508 kernel: Asymmetric key parser 'x509' registered
May 17 00:33:32.967518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:33:32.967529 kernel: io scheduler mq-deadline registered
May 17 00:33:32.967539 kernel: io scheduler kyber registered
May 17 00:33:32.967550 kernel: io scheduler bfq registered
May 17 00:33:32.967564 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:33:32.967575 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:33:32.967586 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:33:32.967597 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 00:33:32.967607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:33:32.967617 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:33:32.967627 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:33:32.967637 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:33:32.967647 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:33:32.967830 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 00:33:32.967854 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:33:32.968030 kernel: rtc_cmos 00:04: registered as rtc0
May 17 00:33:32.968195 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:33:32 UTC (1747442012)
May 17 00:33:32.968357 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:33:32.968373 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 00:33:32.968382 kernel: efifb: probing for efifb
May 17 00:33:32.968392 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 17 00:33:32.968408 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 17 00:33:32.968418 kernel: efifb: scrolling: redraw
May 17 00:33:32.968428 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 17 00:33:32.968438 kernel: Console: switching to colour frame buffer device 100x37
May 17 00:33:32.968448 kernel: fb0: EFI VGA frame buffer device
May 17 00:33:32.968480 kernel: pstore: Using crash dump compression: deflate
May 17 00:33:32.968493 kernel: pstore: Registered efi_pstore as persistent store backend
May 17 00:33:32.968503 kernel: NET: Registered PF_INET6 protocol family
May 17 00:33:32.968512 kernel: Segment Routing with IPv6
May 17 00:33:32.968525 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:33:32.968534 kernel: NET: Registered PF_PACKET protocol family
May 17 00:33:32.968544 kernel: Key type dns_resolver registered
May 17 00:33:32.968553 kernel: IPI shorthand broadcast: enabled
May 17 00:33:32.968563 kernel: sched_clock: Marking stable (767004373, 122573213)->(927015636, -37438050)
May 17 00:33:32.968573 kernel: registered taskstats version 1
May 17 00:33:32.968583 kernel: Loading compiled-in X.509 certificates
May 17 00:33:32.968594 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:33:32.968604 kernel: Key type .fscrypt registered
May 17 00:33:32.968618 kernel: Key type fscrypt-provisioning registered
May 17 00:33:32.968628 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:33:32.968639 kernel: ima: Allocated hash algorithm: sha1
May 17 00:33:32.968649 kernel: ima: No architecture policies found
May 17 00:33:32.968660 kernel: clk: Disabling unused clocks
May 17 00:33:32.968671 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:33:32.968681 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:33:32.968692 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:33:32.968706 kernel: Run /init as init process
May 17 00:33:32.968717 kernel: with arguments:
May 17 00:33:32.968727 kernel: /init
May 17 00:33:32.968737 kernel: with environment:
May 17 00:33:32.968747 kernel: HOME=/
May 17 00:33:32.968758 kernel: TERM=linux
May 17 00:33:32.968768 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:33:32.968782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:33:32.968799 systemd[1]: Detected virtualization kvm.
May 17 00:33:32.968810 systemd[1]: Detected architecture x86-64.
May 17 00:33:32.968821 systemd[1]: Running in initrd.
May 17 00:33:32.968833 systemd[1]: No hostname configured, using default hostname.
May 17 00:33:32.968849 systemd[1]: Hostname set to <localhost>.
May 17 00:33:32.968867 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:33:32.968881 systemd[1]: Queued start job for default target initrd.target.
May 17 00:33:32.968897 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:33:32.968913 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:33:32.968928 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:33:32.968943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:33:32.968959 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:33:32.968978 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:33:32.969000 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:33:32.969015 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:33:32.969031 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:33:32.969046 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:33:32.969060 systemd[1]: Reached target paths.target - Path Units.
May 17 00:33:32.969074 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:33:32.969085 systemd[1]: Reached target swap.target - Swaps.
May 17 00:33:32.969101 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:33:32.969113 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:33:32.969125 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:33:32.969137 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:33:32.969159 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:33:32.969171 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:33:32.969183 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:33:32.969195 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:33:32.969211 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:33:32.969222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:33:32.969233 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:33:32.969246 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:33:32.969274 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:33:32.969287 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:33:32.969300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:33:32.969313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:33:32.969325 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:33:32.969342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:33:32.969355 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:33:32.969394 systemd-journald[193]: Collecting audit messages is disabled.
May 17 00:33:32.969426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:33:32.969439 systemd-journald[193]: Journal started
May 17 00:33:32.969466 systemd-journald[193]: Runtime Journal (/run/log/journal/96804ce82624480d880e180c9c281d7f) is 6.0M, max 48.3M, 42.2M free.
May 17 00:33:33.007038 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:33:33.009409 systemd-modules-load[194]: Inserted module 'overlay'
May 17 00:33:33.009526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:33:33.014022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:33:33.020243 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:33:33.023734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:33:33.026707 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:33:33.093721 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:33:33.095533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:33:33.097037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:33:33.099284 kernel: Bridge firewalling registered
May 17 00:33:33.098890 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 17 00:33:33.100479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:33:33.165667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:33:33.186403 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:33:33.189388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:33:33.200294 dracut-cmdline[223]: dracut-dracut-053
May 17 00:33:33.203073 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:33:33.226624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:33:33.230966 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:33:33.267665 systemd-resolved[238]: Positive Trust Anchors:
May 17 00:33:33.267683 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:33:33.267721 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:33:33.270598 systemd-resolved[238]: Defaulting to hostname 'linux'.
May 17 00:33:33.271745 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:33:33.278766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:33:33.351309 kernel: SCSI subsystem initialized
May 17 00:33:33.363285 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:33:33.391308 kernel: iscsi: registered transport (tcp)
May 17 00:33:33.417302 kernel: iscsi: registered transport (qla4xxx)
May 17 00:33:33.417373 kernel: QLogic iSCSI HBA Driver
May 17 00:33:33.472635 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:33:33.502522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:33:33.535970 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:33:33.536021 kernel: device-mapper: uevent: version 1.0.3
May 17 00:33:33.536035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:33:33.579322 kernel: raid6: avx2x4 gen() 26447 MB/s
May 17 00:33:33.596308 kernel: raid6: avx2x2 gen() 28995 MB/s
May 17 00:33:33.613462 kernel: raid6: avx2x1 gen() 25305 MB/s
May 17 00:33:33.613529 kernel: raid6: using algorithm avx2x2 gen() 28995 MB/s
May 17 00:33:33.632302 kernel: raid6: .... xor() 18654 MB/s, rmw enabled
May 17 00:33:33.632382 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:33:33.653309 kernel: xor: automatically using best checksumming function avx
May 17 00:33:33.830311 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:33:33.844680 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:33:33.851465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:33:33.868622 systemd-udevd[412]: Using default interface naming scheme 'v255'.
May 17 00:33:33.873593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:33:33.884509 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:33:33.897520 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
May 17 00:33:33.935566 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:33:33.949450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:33:34.018562 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:33:34.029442 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:33:34.043524 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:33:34.046595 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:33:34.049940 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:33:34.052506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:33:34.061336 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:33:34.061388 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 17 00:33:34.061782 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:33:34.079803 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 17 00:33:34.096209 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:33:34.103688 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:33:34.103718 kernel: AES CTR mode by8 optimization enabled
May 17 00:33:34.103729 kernel: libata version 3.00 loaded.
May 17 00:33:34.103745 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:33:34.103756 kernel: GPT:9289727 != 19775487
May 17 00:33:34.103766 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:33:34.103775 kernel: GPT:9289727 != 19775487
May 17 00:33:34.103785 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:33:34.103794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:33:34.107661 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:33:34.107901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:33:34.112274 kernel: ahci 0000:00:1f.2: version 3.0
May 17 00:33:34.114017 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:33:34.120662 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 17 00:33:34.120693 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 17 00:33:34.120936 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 17 00:33:34.114103 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:33:34.114497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:33:34.129813 kernel: scsi host0: ahci
May 17 00:33:34.130053 kernel: scsi host1: ahci
May 17 00:33:34.131269 kernel: scsi host2: ahci
May 17 00:33:34.124101 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:33:34.135030 kernel: scsi host3: ahci
May 17 00:33:34.137343 kernel: scsi host4: ahci
May 17 00:33:34.137585 kernel: scsi host5: ahci
May 17 00:33:34.141287 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (467)
May 17 00:33:34.141335 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 17 00:33:34.141351 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 17 00:33:34.143504 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 17 00:33:34.143539 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456)
May 17 00:33:34.143554 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 17 00:33:34.145415 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 17 00:33:34.145451 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 17 00:33:34.148637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:33:34.163987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:33:34.181125 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 17 00:33:34.189747 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 17 00:33:34.198359 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 17 00:33:34.201299 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 17 00:33:34.209472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:33:34.221468 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:33:34.225096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:33:34.229956 disk-uuid[553]: Primary Header is updated.
May 17 00:33:34.229956 disk-uuid[553]: Secondary Entries is updated.
May 17 00:33:34.229956 disk-uuid[553]: Secondary Header is updated.
May 17 00:33:34.233273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:33:34.239292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:33:34.244289 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:33:34.246736 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:33:34.453734 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 00:33:34.453807 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 17 00:33:34.453819 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 00:33:34.455286 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 17 00:33:34.456289 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 00:33:34.457290 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 17 00:33:34.458301 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 17 00:33:34.458327 kernel: ata3.00: applying bridge limits
May 17 00:33:34.459312 kernel: ata3.00: configured for UDMA/100
May 17 00:33:34.461278 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:33:34.506284 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 17 00:33:34.506596 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:33:34.527306 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 17 00:33:35.245282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:33:35.245795 disk-uuid[556]: The operation has completed successfully.
May 17 00:33:35.275146 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:33:35.275275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:33:35.296384 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:33:35.301893 sh[594]: Success
May 17 00:33:35.314272 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 17 00:33:35.346235 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:33:35.363558 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:33:35.368014 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:33:35.377757 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:33:35.377793 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:35.377804 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:33:35.378894 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:33:35.379723 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:33:35.384311 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:33:35.386510 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:33:35.404388 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:33:35.407189 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:33:35.416318 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:33:35.416353 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:35.416366 kernel: BTRFS info (device vda6): using free space tree
May 17 00:33:35.419280 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:33:35.427990 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:33:35.430041 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:33:35.440353 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:33:35.447604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:33:35.497889 ignition[687]: Ignition 2.19.0
May 17 00:33:35.497902 ignition[687]: Stage: fetch-offline
May 17 00:33:35.497937 ignition[687]: no configs at "/usr/lib/ignition/base.d"
May 17 00:33:35.497946 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:33:35.498030 ignition[687]: parsed url from cmdline: ""
May 17 00:33:35.498034 ignition[687]: no config URL provided
May 17 00:33:35.498039 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:33:35.498047 ignition[687]: no config at "/usr/lib/ignition/user.ign"
May 17 00:33:35.498084 ignition[687]: op(1): [started] loading QEMU firmware config module
May 17 00:33:35.498090 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 17 00:33:35.507090 ignition[687]: op(1): [finished] loading QEMU firmware config module
May 17 00:33:35.538578 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:33:35.550696 ignition[687]: parsing config with SHA512: 4176967e2f54d3e3262b649496941ba330ac26631d19afd3ebe3cce517223fe1f1425dbbc55136c5bf0c395bb6ec1cc48192b4629ae4b4f5cbd75cfaf1dfb1a7
May 17 00:33:35.552408 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:33:35.554537 unknown[687]: fetched base config from "system"
May 17 00:33:35.554944 ignition[687]: fetch-offline: fetch-offline passed
May 17 00:33:35.554547 unknown[687]: fetched user config from "qemu"
May 17 00:33:35.555006 ignition[687]: Ignition finished successfully
May 17 00:33:35.557406 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:33:35.585033 systemd-networkd[784]: lo: Link UP
May 17 00:33:35.585044 systemd-networkd[784]: lo: Gained carrier
May 17 00:33:35.588189 systemd-networkd[784]: Enumeration completed
May 17 00:33:35.588342 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:33:35.588684 systemd[1]: Reached target network.target - Network.
May 17 00:33:35.588944 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:33:35.594696 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:33:35.594705 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:33:35.597383 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:33:35.600712 systemd-networkd[784]: eth0: Link UP
May 17 00:33:35.600721 systemd-networkd[784]: eth0: Gained carrier
May 17 00:33:35.600728 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:33:35.611514 ignition[787]: Ignition 2.19.0
May 17 00:33:35.611526 ignition[787]: Stage: kargs
May 17 00:33:35.611698 ignition[787]: no configs at "/usr/lib/ignition/base.d"
May 17 00:33:35.611711 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:33:35.612641 ignition[787]: kargs: kargs passed
May 17 00:33:35.612683 ignition[787]: Ignition finished successfully
May 17 00:33:35.619015 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:33:35.620320 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:33:35.632393 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:33:35.644794 ignition[796]: Ignition 2.19.0
May 17 00:33:35.644806 ignition[796]: Stage: disks
May 17 00:33:35.644971 ignition[796]: no configs at "/usr/lib/ignition/base.d"
May 17 00:33:35.644985 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:33:35.648897 ignition[796]: disks: disks passed
May 17 00:33:35.648954 ignition[796]: Ignition finished successfully
May 17 00:33:35.652621 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:33:35.652952 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:33:35.654779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:33:35.658287 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:33:35.658525 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:33:35.658856 systemd[1]: Reached target basic.target - Basic System.
May 17 00:33:35.674424 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:33:35.688993 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:33:35.695932 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:33:35.705388 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:33:35.795289 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:33:35.796113 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:33:35.796801 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:33:35.806501 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:33:35.808888 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:33:35.810652 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 00:33:35.816507 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
May 17 00:33:35.810697 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:33:35.823503 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:33:35.823523 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:35.823543 kernel: BTRFS info (device vda6): using free space tree
May 17 00:33:35.823557 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:33:35.810721 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:33:35.817773 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:33:35.825019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:33:35.829173 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:33:35.868507 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:33:35.872846 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
May 17 00:33:35.876892 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:33:35.882161 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:33:35.969672 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:33:35.984452 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:33:35.986311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:33:35.993681 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:33:36.016082 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:33:36.020120 ignition[927]: INFO : Ignition 2.19.0
May 17 00:33:36.020120 ignition[927]: INFO : Stage: mount
May 17 00:33:36.021913 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:33:36.021913 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:33:36.021913 ignition[927]: INFO : mount: mount passed
May 17 00:33:36.021913 ignition[927]: INFO : Ignition finished successfully
May 17 00:33:36.028100 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:33:36.035451 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:33:36.377473 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:33:36.394560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:33:36.402288 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
May 17 00:33:36.404839 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:33:36.404867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:36.404882 kernel: BTRFS info (device vda6): using free space tree
May 17 00:33:36.409277 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:33:36.410595 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:33:36.434808 ignition[959]: INFO : Ignition 2.19.0
May 17 00:33:36.434808 ignition[959]: INFO : Stage: files
May 17 00:33:36.437189 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:33:36.437189 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:33:36.437189 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:33:36.441555 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:33:36.441555 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:33:36.441555 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:33:36.441555 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:33:36.448341 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:33:36.448341 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:33:36.448341 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:33:36.443267 unknown[959]: wrote ssh authorized keys file for user: core
May 17 00:33:36.525814 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:33:36.637067 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:33:36.637067 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:33:36.641246 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:33:36.643189 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:33:36.645266 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:33:36.647204 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:33:36.649272 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:33:36.651229 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:33:36.653292 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:33:36.655544 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:33:36.657536 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:33:36.657536 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:33:36.657536 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:33:36.657536 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:33:36.657536 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:33:36.942399 systemd-networkd[784]: eth0: Gained IPv6LL
May 17 00:33:37.403373 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:33:37.781109 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:33:37.781109 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 17 00:33:37.785437 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 17 00:33:37.807046 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:33:37.812524 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:33:37.814344 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 00:33:37.814344 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:33:37.814344 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:33:37.814344 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:33:37.814344 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:33:37.814344 ignition[959]: INFO : files: files passed
May 17 00:33:37.814344 ignition[959]: INFO : Ignition finished successfully
May 17 00:33:37.826805 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:33:37.837462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:33:37.840732 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:33:37.843754 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:33:37.844897 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:33:37.850742 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
May 17 00:33:37.854937 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:33:37.854937 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:33:37.858205 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:33:37.861219 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:33:37.863897 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:33:37.872406 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:33:37.896585 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:33:37.896710 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:33:37.899295 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:33:37.900459 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:33:37.902509 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:33:37.913380 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:33:37.928229 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:33:37.936427 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:33:37.945759 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:33:37.945937 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:33:37.948232 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:33:37.950450 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:33:37.950599 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:33:37.955443 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:33:37.955624 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:33:37.957562 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:33:37.959329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:33:37.961436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:33:37.963669 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:33:37.965821 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:33:37.967802 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:33:37.970179 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:33:37.972133 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:33:37.974048 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:33:37.974195 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:33:37.978601 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:33:37.978775 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:33:37.980786 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:33:37.983881 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:33:37.986639 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:33:37.986779 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:33:37.989753 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:33:37.989901 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:33:37.992239 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:33:37.993349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:33:37.998312 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:33:37.998509 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:33:38.001133 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:33:38.002858 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:33:38.002984 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:33:38.004725 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:33:38.004843 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:33:38.007591 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:33:38.007741 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:33:38.008606 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:33:38.008745 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:33:38.019426 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:33:38.019514 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:33:38.019665 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:33:38.023744 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:33:38.024045 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:33:38.024210 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:33:38.026378 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:33:38.026513 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:33:38.036127 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:33:38.036298 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:33:38.047810 ignition[1012]: INFO : Ignition 2.19.0
May 17 00:33:38.047810 ignition[1012]: INFO : Stage: umount
May 17 00:33:38.049863 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:33:38.049863 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:33:38.049863 ignition[1012]: INFO : umount: umount passed
May 17 00:33:38.049863 ignition[1012]: INFO : Ignition finished successfully
May 17 00:33:38.050500 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:33:38.050642 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:33:38.052596 systemd[1]: Stopped target network.target - Network.
May 17 00:33:38.054222 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:33:38.054313 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:33:38.056382 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:33:38.056437 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:33:38.059210 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:33:38.059318 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:33:38.061429 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:33:38.061492 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:33:38.063888 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:33:38.066074 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:33:38.069618 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:33:38.070301 systemd-networkd[784]: eth0: DHCPv6 lease lost
May 17 00:33:38.070345 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:33:38.070507 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:33:38.074471 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:33:38.074579 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:33:38.077312 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:33:38.077477 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:33:38.080055 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:33:38.080130 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:33:38.086332 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:33:38.087750 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:33:38.087821 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:33:38.090184 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:33:38.090234 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:33:38.092380 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:33:38.092428 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:33:38.092584 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:33:38.101486 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:33:38.101635 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:33:38.115024 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:33:38.115206 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:33:38.117443 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:33:38.117489 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:33:38.119603 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:33:38.119641 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:33:38.121637 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:33:38.121687 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:33:38.123797 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:33:38.123846 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:33:38.145171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:33:38.145224 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:33:38.157402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:33:38.159668 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:33:38.159738 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:33:38.163299 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:33:38.163352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:33:38.166941 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:33:38.168178 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:33:38.209331 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:33:38.210474 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:33:38.212776 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:33:38.215209 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:33:38.216326 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:33:38.233399 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:33:38.239832 systemd[1]: Switching root.
May 17 00:33:38.268424 systemd-journald[193]: Journal stopped
May 17 00:33:39.548972 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 17 00:33:39.549049 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:33:39.549064 kernel: SELinux: policy capability open_perms=1
May 17 00:33:39.549081 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:33:39.549094 kernel: SELinux: policy capability always_check_network=0
May 17 00:33:39.549108 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:33:39.549123 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:33:39.549138 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:33:39.549157 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:33:39.549169 kernel: audit: type=1403 audit(1747442018.764:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:33:39.549187 systemd[1]: Successfully loaded SELinux policy in 41.831ms.
May 17 00:33:39.549210 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.628ms.
May 17 00:33:39.549227 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:33:39.549239 systemd[1]: Detected virtualization kvm.
May 17 00:33:39.549251 systemd[1]: Detected architecture x86-64.
May 17 00:33:39.549274 systemd[1]: Detected first boot.
May 17 00:33:39.549286 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:33:39.549298 zram_generator::config[1056]: No configuration found.
May 17 00:33:39.549311 systemd[1]: Populated /etc with preset unit settings.
May 17 00:33:39.549323 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:33:39.549338 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:33:39.549354 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:33:39.549372 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:33:39.549384 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:33:39.549398 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:33:39.549410 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:33:39.549422 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:33:39.549434 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:33:39.549449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:33:39.549461 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:33:39.549473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:33:39.549485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:33:39.549497 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:33:39.549509 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:33:39.549521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:33:39.549533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:33:39.549545 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:33:39.549560 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:33:39.549572 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:33:39.549584 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:33:39.549596 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:33:39.549608 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:33:39.549620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:33:39.549632 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:33:39.549644 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:33:39.549660 systemd[1]: Reached target swap.target - Swaps.
May 17 00:33:39.549672 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:33:39.549684 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:33:39.549696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:33:39.549708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:33:39.550062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:33:39.550075 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:33:39.550087 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:33:39.550099 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:33:39.550113 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:33:39.550125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:39.550137 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:33:39.550149 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:33:39.550161 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:33:39.550174 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:33:39.550186 systemd[1]: Reached target machines.target - Containers.
May 17 00:33:39.550198 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:33:39.550210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:33:39.550225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:33:39.550237 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:33:39.550249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:33:39.550279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:33:39.550291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:33:39.550305 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:33:39.550318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:33:39.550333 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:33:39.550350 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:33:39.550362 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:33:39.550373 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:33:39.550385 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:33:39.550397 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:33:39.550409 kernel: fuse: init (API version 7.39)
May 17 00:33:39.550421 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:33:39.550432 kernel: loop: module loaded
May 17 00:33:39.550444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:33:39.550459 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:33:39.550471 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:33:39.550483 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:33:39.550495 systemd[1]: Stopped verity-setup.service.
May 17 00:33:39.550507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:39.550519 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:33:39.550551 systemd-journald[1126]: Collecting audit messages is disabled.
May 17 00:33:39.550581 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:33:39.550593 kernel: ACPI: bus type drm_connector registered
May 17 00:33:39.550605 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:33:39.550617 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:33:39.550630 systemd-journald[1126]: Journal started
May 17 00:33:39.550654 systemd-journald[1126]: Runtime Journal (/run/log/journal/96804ce82624480d880e180c9c281d7f) is 6.0M, max 48.3M, 42.2M free.
May 17 00:33:39.291142 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:33:39.309120 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 00:33:39.309682 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:33:39.554291 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:33:39.555194 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:33:39.556662 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:33:39.558124 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:33:39.559800 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:33:39.561641 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:33:39.561840 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:33:39.563692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:33:39.563888 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:33:39.565639 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:33:39.565845 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:33:39.567550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:33:39.567749 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:33:39.569593 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:33:39.569790 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:33:39.571409 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:33:39.571600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:33:39.573167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:33:39.574717 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:33:39.576438 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:33:39.592192 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:33:39.601418 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:33:39.604119 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:33:39.605344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:33:39.605381 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:33:39.607522 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:33:39.609911 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:33:39.613490 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:33:39.614929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:33:39.617559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:33:39.621958 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:33:39.624473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:33:39.626938 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:33:39.628479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:33:39.630145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:33:39.636848 systemd-journald[1126]: Time spent on flushing to /var/log/journal/96804ce82624480d880e180c9c281d7f is 25.586ms for 988 entries.
May 17 00:33:39.636848 systemd-journald[1126]: System Journal (/var/log/journal/96804ce82624480d880e180c9c281d7f) is 8.0M, max 195.6M, 187.6M free.
May 17 00:33:39.683498 systemd-journald[1126]: Received client request to flush runtime journal.
May 17 00:33:39.683571 kernel: loop0: detected capacity change from 0 to 140768
May 17 00:33:39.637537 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:33:39.644823 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:33:39.648184 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:33:39.653023 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:33:39.654674 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:33:39.667225 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:33:39.670358 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:33:39.674983 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:33:39.685486 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:33:39.689581 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:33:39.693845 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:33:39.697291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:33:39.704709 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:33:39.706304 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:33:39.708533 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:33:39.709187 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:33:39.716828 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:33:39.723566 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:33:39.730293 kernel: loop1: detected capacity change from 0 to 142488
May 17 00:33:39.750898 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
May 17 00:33:39.750924 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
May 17 00:33:39.757745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:33:39.770353 kernel: loop2: detected capacity change from 0 to 224512
May 17 00:33:39.805298 kernel: loop3: detected capacity change from 0 to 140768
May 17 00:33:39.821297 kernel: loop4: detected capacity change from 0 to 142488
May 17 00:33:39.834495 kernel: loop5: detected capacity change from 0 to 224512
May 17 00:33:39.843295 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 17 00:33:39.844770 (sd-merge)[1196]: Merged extensions into '/usr'.
May 17 00:33:39.849855 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:33:39.849876 systemd[1]: Reloading...
May 17 00:33:39.915351 zram_generator::config[1222]: No configuration found.
May 17 00:33:39.973979 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:33:40.074085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:33:40.142568 systemd[1]: Reloading finished in 292 ms.
May 17 00:33:40.180184 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:33:40.181985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:33:40.196495 systemd[1]: Starting ensure-sysext.service...
May 17 00:33:40.198790 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:33:40.207712 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
May 17 00:33:40.207729 systemd[1]: Reloading...
May 17 00:33:40.231345 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:33:40.231871 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:33:40.233322 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:33:40.233799 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 17 00:33:40.233907 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 17 00:33:40.239413 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:33:40.239431 systemd-tmpfiles[1260]: Skipping /boot
May 17 00:33:40.257957 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:33:40.257976 systemd-tmpfiles[1260]: Skipping /boot
May 17 00:33:40.268290 zram_generator::config[1290]: No configuration found.
May 17 00:33:40.386568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:33:40.450508 systemd[1]: Reloading finished in 242 ms.
May 17 00:33:40.473842 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:33:40.487869 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:33:40.498890 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:33:40.501991 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:33:40.504787 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:33:40.509520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:33:40.513675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:33:40.519014 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:33:40.524335 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:40.524517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:33:40.526185 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:33:40.532333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:33:40.536064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:33:40.537330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:33:40.541052 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:33:40.542399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:40.543486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:33:40.543662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:33:40.545673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:33:40.545989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:33:40.548165 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:33:40.548529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:33:40.556653 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:33:40.562323 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
May 17 00:33:40.565495 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:33:40.565703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:33:40.567463 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:33:40.569197 augenrules[1355]: No rules
May 17 00:33:40.572799 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:33:40.574932 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:33:40.581197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:40.581552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:33:40.589517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:33:40.593345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:33:40.596713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:33:40.597899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:33:40.598014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:40.598713 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:33:40.602663 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:33:40.604614 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:33:40.606590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:33:40.607413 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:33:40.609951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:33:40.610124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:33:40.614791 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:33:40.616689 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:33:40.617022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:33:40.631503 systemd[1]: Finished ensure-sysext.service.
May 17 00:33:40.635673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:40.635947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:33:40.646513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:33:40.653105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:33:40.666508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:33:40.667981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:33:40.670960 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:33:40.674893 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:33:40.676520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:33:40.676559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:33:40.677520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:33:40.678364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:33:40.680042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:33:40.680421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:33:40.685251 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:33:40.685497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:33:40.695126 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 17 00:33:40.696720 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:33:40.697485 systemd-resolved[1330]: Positive Trust Anchors:
May 17 00:33:40.706492 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1394)
May 17 00:33:40.697579 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:33:40.698053 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:33:40.698101 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:33:40.702388 systemd-resolved[1330]: Defaulting to hostname 'linux'.
May 17 00:33:40.706747 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:33:40.708587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:33:40.730931 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:33:40.737603 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 17 00:33:40.740539 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:33:40.745278 kernel: ACPI: button: Power Button [PWRF]
May 17 00:33:40.755898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:33:40.772027 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 17 00:33:40.772439 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 17 00:33:40.775064 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 17 00:33:40.779336 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 17 00:33:40.779551 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 17 00:33:40.781454 systemd-networkd[1401]: lo: Link UP
May 17 00:33:40.781464 systemd-networkd[1401]: lo: Gained carrier
May 17 00:33:40.783117 systemd-networkd[1401]: Enumeration completed
May 17 00:33:40.783523 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:33:40.783527 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:33:40.784500 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:33:40.784701 systemd-networkd[1401]: eth0: Link UP
May 17 00:33:40.784711 systemd-networkd[1401]: eth0: Gained carrier
May 17 00:33:40.784723 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:33:40.785860 systemd[1]: Reached target network.target - Network.
May 17 00:33:40.796505 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:33:40.798497 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:33:40.800112 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:33:40.800696 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
May 17 00:33:40.801610 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:33:41.477694 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 17 00:33:41.477754 systemd-timesyncd[1403]: Initial clock synchronization to Sat 2025-05-17 00:33:41.476784 UTC.
May 17 00:33:41.477852 systemd-resolved[1330]: Clock change detected. Flushing caches.
May 17 00:33:41.509559 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:33:41.509597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:33:41.585085 kernel: kvm_amd: TSC scaling supported
May 17 00:33:41.585194 kernel: kvm_amd: Nested Virtualization enabled
May 17 00:33:41.585215 kernel: kvm_amd: Nested Paging enabled
May 17 00:33:41.585246 kernel: kvm_amd: LBR virtualization supported
May 17 00:33:41.586875 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 17 00:33:41.586927 kernel: kvm_amd: Virtual GIF supported
May 17 00:33:41.609568 kernel: EDAC MC: Ver: 3.0.0
May 17 00:33:41.619610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:33:41.643742 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:33:41.659694 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:33:41.669433 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:33:41.697867 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:33:41.700476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:33:41.701675 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:33:41.702953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:33:41.704259 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:33:41.705811 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:33:41.707005 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:33:41.708282 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:33:41.709558 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:33:41.709594 systemd[1]: Reached target paths.target - Path Units.
May 17 00:33:41.710518 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:33:41.712403 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:33:41.715708 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:33:41.727288 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:33:41.729992 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:33:41.731782 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:33:41.732978 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:33:41.733979 systemd[1]: Reached target basic.target - Basic System.
May 17 00:33:41.734949 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:33:41.734975 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:33:41.736053 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:33:41.738351 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:33:41.743674 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:33:41.746899 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:33:41.747761 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:33:41.748866 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:33:41.750792 jq[1438]: false
May 17 00:33:41.751737 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:33:41.757186 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:33:41.761728 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:33:41.765729 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:33:41.772389 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:33:41.773956 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:33:41.774377 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:33:41.775728 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:33:41.778598 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:33:41.779405 dbus-daemon[1437]: [system] SELinux support is enabled
May 17 00:33:41.780472 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:33:41.783846 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:33:41.788317 extend-filesystems[1439]: Found loop3 May 17 00:33:41.788317 extend-filesystems[1439]: Found loop4 May 17 00:33:41.788317 extend-filesystems[1439]: Found loop5 May 17 00:33:41.788317 extend-filesystems[1439]: Found sr0 May 17 00:33:41.788317 extend-filesystems[1439]: Found vda May 17 00:33:41.788317 extend-filesystems[1439]: Found vda1 May 17 00:33:41.788317 extend-filesystems[1439]: Found vda2 May 17 00:33:41.788317 extend-filesystems[1439]: Found vda3 May 17 00:33:41.788317 extend-filesystems[1439]: Found usr May 17 00:33:41.788317 extend-filesystems[1439]: Found vda4 May 17 00:33:41.788317 extend-filesystems[1439]: Found vda6 May 17 00:33:41.788317 extend-filesystems[1439]: Found vda7 May 17 00:33:41.788317 extend-filesystems[1439]: Found vda9 May 17 00:33:41.788317 extend-filesystems[1439]: Checking size of /dev/vda9 May 17 00:33:41.788412 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:33:41.825380 extend-filesystems[1439]: Resized partition /dev/vda9 May 17 00:33:41.831557 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:33:41.788664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:33:41.831747 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) May 17 00:33:41.839449 update_engine[1451]: I20250517 00:33:41.807631 1451 main.cc:92] Flatcar Update Engine starting May 17 00:33:41.839449 update_engine[1451]: I20250517 00:33:41.808955 1451 update_check_scheduler.cc:74] Next update check in 7m27s May 17 00:33:41.788993 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:33:41.840404 jq[1452]: true May 17 00:33:41.790229 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:33:41.799369 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:33:41.799651 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:33:41.812593 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:33:41.841169 jq[1464]: true May 17 00:33:41.825806 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:33:41.825832 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:33:41.832908 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:33:41.832925 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:33:41.857024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1376) May 17 00:33:41.859549 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:33:41.861556 tar[1456]: linux-amd64/LICENSE May 17 00:33:41.868669 systemd[1]: Started update-engine.service - Update Engine. 
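extend-filesystems found /dev/vda9 smaller than its partition and queued an online grow; resize2fs 1.47.1 then resized the mounted root from 553472 to 1864699 4k blocks (the "Resized filesystem" lines in the next stanza). The manual equivalent, as a sketch with the device name taken from the log:

    # Online-grow an ext4 root after the partition itself has been enlarged.
    lsblk /dev/vda              # confirm vda9 reflects the new partition size
    sudo resize2fs /dev/vda9    # ext4 supports growing while mounted at /
    df -h /                     # capacity should now include the extra space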
May 17 00:33:41.881367 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:33:41.881396 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:33:41.887382 tar[1456]: linux-amd64/helm May 17 00:33:41.881876 systemd-logind[1447]: New seat seat0. May 17 00:33:41.882795 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:33:41.887159 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:33:41.893210 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:33:41.893210 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:33:41.893210 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:33:41.900609 extend-filesystems[1439]: Resized filesystem in /dev/vda9 May 17 00:33:41.893905 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:33:41.894120 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:33:41.920643 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:33:41.924420 bash[1492]: Updated "/home/core/.ssh/authorized_keys" May 17 00:33:41.926198 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:33:41.928492 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 17 00:33:42.030731 containerd[1461]: time="2025-05-17T00:33:42.030627284Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:33:42.055396 containerd[1461]: time="2025-05-17T00:33:42.055355878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:33:42.057098 containerd[1461]: time="2025-05-17T00:33:42.057059253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:33:42.057098 containerd[1461]: time="2025-05-17T00:33:42.057087185Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:33:42.057232 containerd[1461]: time="2025-05-17T00:33:42.057104568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:33:42.057336 containerd[1461]: time="2025-05-17T00:33:42.057304783Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:33:42.057336 containerd[1461]: time="2025-05-17T00:33:42.057329940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:33:42.057422 containerd[1461]: time="2025-05-17T00:33:42.057399912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:33:42.057422 containerd[1461]: time="2025-05-17T00:33:42.057416383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:33:42.057666 containerd[1461]: time="2025-05-17T00:33:42.057636375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:33:42.057666 containerd[1461]: time="2025-05-17T00:33:42.057658036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:33:42.057714 containerd[1461]: time="2025-05-17T00:33:42.057676971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:33:42.057714 containerd[1461]: time="2025-05-17T00:33:42.057687241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:33:42.057800 containerd[1461]: time="2025-05-17T00:33:42.057783201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:33:42.058045 containerd[1461]: time="2025-05-17T00:33:42.058018913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:33:42.058193 containerd[1461]: time="2025-05-17T00:33:42.058152293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:33:42.058193 containerd[1461]: time="2025-05-17T00:33:42.058186367Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:33:42.058319 containerd[1461]: time="2025-05-17T00:33:42.058295141Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:33:42.058374 containerd[1461]: time="2025-05-17T00:33:42.058353661Z" level=info msg="metadata content store policy set" policy=shared May 17 00:33:42.188950 containerd[1461]: time="2025-05-17T00:33:42.188808447Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:33:42.188950 containerd[1461]: time="2025-05-17T00:33:42.188889839Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:33:42.188950 containerd[1461]: time="2025-05-17T00:33:42.188906350Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:33:42.188950 containerd[1461]: time="2025-05-17T00:33:42.188931848Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:33:42.188950 containerd[1461]: time="2025-05-17T00:33:42.188946135Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:33:42.189192 containerd[1461]: time="2025-05-17T00:33:42.189124650Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:33:42.189426 containerd[1461]: time="2025-05-17T00:33:42.189393334Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 17 00:33:42.189595 containerd[1461]: time="2025-05-17T00:33:42.189503661Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:33:42.189647 containerd[1461]: time="2025-05-17T00:33:42.189525131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:33:42.189647 containerd[1461]: time="2025-05-17T00:33:42.189609940Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:33:42.189647 containerd[1461]: time="2025-05-17T00:33:42.189623465Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189647 containerd[1461]: time="2025-05-17T00:33:42.189639295Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189647 containerd[1461]: time="2025-05-17T00:33:42.189650967Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189668239Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189682205Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189697103Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189709046Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189735345Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189754932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189767435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189779508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189791070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:33:42.189794 containerd[1461]: time="2025-05-17T00:33:42.189802601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189820725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189833950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189846484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189862193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189878854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189890356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189901507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189914972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189928818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189946882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189957602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.189968633Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.190024598Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:33:42.190074 containerd[1461]: time="2025-05-17T00:33:42.190042361Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:33:42.190481 containerd[1461]: time="2025-05-17T00:33:42.190052861Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:33:42.190481 containerd[1461]: time="2025-05-17T00:33:42.190132009Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:33:42.190481 containerd[1461]: time="2025-05-17T00:33:42.190144643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:33:42.190481 containerd[1461]: time="2025-05-17T00:33:42.190156966Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:33:42.190481 containerd[1461]: time="2025-05-17T00:33:42.190180580Z" level=info msg="NRI interface is disabled by configuration." May 17 00:33:42.190481 containerd[1461]: time="2025-05-17T00:33:42.190190289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:33:42.190669 containerd[1461]: time="2025-05-17T00:33:42.190419298Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:33:42.190669 containerd[1461]: time="2025-05-17T00:33:42.190468320Z" level=info msg="Connect containerd service" May 17 00:33:42.190669 containerd[1461]: time="2025-05-17T00:33:42.190497465Z" level=info msg="using legacy CRI server" May 17 00:33:42.190669 containerd[1461]: time="2025-05-17T00:33:42.190505119Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:33:42.190669 containerd[1461]: time="2025-05-17T00:33:42.190620365Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:33:42.191293 containerd[1461]: time="2025-05-17T00:33:42.191265525Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:33:42.191444 
containerd[1461]: time="2025-05-17T00:33:42.191394137Z" level=info msg="Start subscribing containerd event" May 17 00:33:42.191482 containerd[1461]: time="2025-05-17T00:33:42.191449891Z" level=info msg="Start recovering state" May 17 00:33:42.191606 containerd[1461]: time="2025-05-17T00:33:42.191575837Z" level=info msg="Start event monitor" May 17 00:33:42.191606 containerd[1461]: time="2025-05-17T00:33:42.191601636Z" level=info msg="Start snapshots syncer" May 17 00:33:42.191669 containerd[1461]: time="2025-05-17T00:33:42.191610823Z" level=info msg="Start cni network conf syncer for default" May 17 00:33:42.191669 containerd[1461]: time="2025-05-17T00:33:42.191619710Z" level=info msg="Start streaming server" May 17 00:33:42.192154 containerd[1461]: time="2025-05-17T00:33:42.192119046Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:33:42.192281 containerd[1461]: time="2025-05-17T00:33:42.192261133Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:33:42.192797 containerd[1461]: time="2025-05-17T00:33:42.192763906Z" level=info msg="containerd successfully booted in 0.163618s" May 17 00:33:42.192948 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:33:42.214006 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:33:42.238380 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:33:42.248811 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:33:42.256681 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:33:42.256966 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:33:42.260072 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:33:42.276812 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:33:42.282943 tar[1456]: linux-amd64/README.md May 17 00:33:42.288902 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:33:42.291439 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:33:42.292808 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:33:42.305902 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:33:42.802466 systemd-networkd[1401]: eth0: Gained IPv6LL May 17 00:33:42.805676 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:33:42.807513 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:33:42.821812 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 17 00:33:42.824236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:33:42.826364 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:33:42.844811 systemd[1]: coreos-metadata.service: Deactivated successfully. May 17 00:33:42.845180 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 17 00:33:42.846929 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:33:42.850650 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:33:43.587222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:33:43.589203 systemd[1]: Reached target multi-user.target - Multi-User System. 
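The CNI error logged just above ("no network config found in /etc/cni/net.d") is expected at this stage: containerd keeps retrying until something, typically a Kubernetes network add-on, installs a config. Purely as an illustration of what containerd is waiting for, a minimal bridge conflist of the usual shape (the network name and subnet here are made up; a real add-on writes its own file):

    # Hypothetical conflist; on a real cluster the CNI add-on provides this.
    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF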
May 17 00:33:43.590567 systemd[1]: Startup finished in 921ms (kernel) + 6.036s (initrd) + 4.192s (userspace) = 11.150s. May 17 00:33:43.593794 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:33:44.020687 kubelet[1550]: E0517 00:33:44.020550 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:33:44.025128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:33:44.025408 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:33:44.025896 systemd[1]: kubelet.service: Consumed 1.057s CPU time. May 17 00:33:46.453052 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:33:46.454413 systemd[1]: Started sshd@0-10.0.0.5:22-10.0.0.1:57864.service - OpenSSH per-connection server daemon (10.0.0.1:57864). May 17 00:33:46.504755 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 57864 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:33:46.507149 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:46.516522 systemd-logind[1447]: New session 1 of user core. May 17 00:33:46.517884 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:33:46.534842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:33:46.547048 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:33:46.558820 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:33:46.562030 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:33:46.679487 systemd[1568]: Queued start job for default target default.target. May 17 00:33:46.694369 systemd[1568]: Created slice app.slice - User Application Slice. May 17 00:33:46.694407 systemd[1568]: Reached target paths.target - Paths. May 17 00:33:46.694426 systemd[1568]: Reached target timers.target - Timers. May 17 00:33:46.696150 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:33:46.707923 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:33:46.708068 systemd[1568]: Reached target sockets.target - Sockets. May 17 00:33:46.708087 systemd[1568]: Reached target basic.target - Basic System. May 17 00:33:46.708124 systemd[1568]: Reached target default.target - Main User Target. May 17 00:33:46.708158 systemd[1568]: Startup finished in 138ms. May 17 00:33:46.708740 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:33:46.710699 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:33:46.771983 systemd[1]: Started sshd@1-10.0.0.5:22-10.0.0.1:57874.service - OpenSSH per-connection server daemon (10.0.0.1:57874). May 17 00:33:46.813407 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 57874 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:33:46.815254 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:46.819787 systemd-logind[1447]: New session 2 of user core. 
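This kubelet failure is the normal pre-bootstrap state: the unit reads /var/lib/kubelet/config.yaml, which kubeadm init or kubeadm join writes, so kubelet exits and systemd restarts it on a timer (the "Scheduled restart job" entries later in the log) until bootstrap runs. A sketch for confirming that diagnosis:

    # The failing path from the log; absent until kubeadm has run.
    ls -l /var/lib/kubelet/config.yaml
    # The restart loop and exit status are visible in the unit state.
    systemctl status kubelet
    journalctl -u kubelet -n 20 --no-pager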
May 17 00:33:46.829733 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:33:46.884496 sshd[1579]: pam_unix(sshd:session): session closed for user core May 17 00:33:46.892813 systemd[1]: sshd@1-10.0.0.5:22-10.0.0.1:57874.service: Deactivated successfully. May 17 00:33:46.894472 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:33:46.895894 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. May 17 00:33:46.897184 systemd[1]: Started sshd@2-10.0.0.5:22-10.0.0.1:57886.service - OpenSSH per-connection server daemon (10.0.0.1:57886). May 17 00:33:46.898042 systemd-logind[1447]: Removed session 2. May 17 00:33:46.935722 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 57886 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:33:46.937902 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:46.943915 systemd-logind[1447]: New session 3 of user core. May 17 00:33:46.952773 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:33:47.005807 sshd[1586]: pam_unix(sshd:session): session closed for user core May 17 00:33:47.017933 systemd[1]: sshd@2-10.0.0.5:22-10.0.0.1:57886.service: Deactivated successfully. May 17 00:33:47.020182 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:33:47.021999 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. May 17 00:33:47.042083 systemd[1]: Started sshd@3-10.0.0.5:22-10.0.0.1:57902.service - OpenSSH per-connection server daemon (10.0.0.1:57902). May 17 00:33:47.043163 systemd-logind[1447]: Removed session 3. May 17 00:33:47.077644 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 57902 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:33:47.079368 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:47.083801 systemd-logind[1447]: New session 4 of user core. May 17 00:33:47.092773 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:33:47.149588 sshd[1593]: pam_unix(sshd:session): session closed for user core May 17 00:33:47.160892 systemd[1]: sshd@3-10.0.0.5:22-10.0.0.1:57902.service: Deactivated successfully. May 17 00:33:47.162809 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:33:47.164972 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. May 17 00:33:47.176949 systemd[1]: Started sshd@4-10.0.0.5:22-10.0.0.1:57914.service - OpenSSH per-connection server daemon (10.0.0.1:57914). May 17 00:33:47.178141 systemd-logind[1447]: Removed session 4. May 17 00:33:47.211327 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 57914 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:33:47.213332 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:47.217866 systemd-logind[1447]: New session 5 of user core. May 17 00:33:47.231690 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 17 00:33:47.291463 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:33:47.291818 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:33:47.444610 sudo[1603]: pam_unix(sudo:session): session closed for user root May 17 00:33:47.446820 sshd[1600]: pam_unix(sshd:session): session closed for user core May 17 00:33:47.461800 systemd[1]: sshd@4-10.0.0.5:22-10.0.0.1:57914.service: Deactivated successfully. May 17 00:33:47.463724 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:33:47.465970 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. May 17 00:33:47.466884 systemd[1]: Started sshd@5-10.0.0.5:22-10.0.0.1:57930.service - OpenSSH per-connection server daemon (10.0.0.1:57930). May 17 00:33:47.467928 systemd-logind[1447]: Removed session 5. May 17 00:33:47.505454 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 57930 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:33:47.506986 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:47.510848 systemd-logind[1447]: New session 6 of user core. May 17 00:33:47.533697 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:33:47.590273 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:33:47.590788 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:33:47.595304 sudo[1612]: pam_unix(sudo:session): session closed for user root May 17 00:33:47.602800 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:33:47.603227 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:33:47.624860 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:33:47.626889 auditctl[1615]: No rules May 17 00:33:47.628454 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:33:47.628798 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:33:47.631083 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:33:47.664572 augenrules[1633]: No rules May 17 00:33:47.666780 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:33:47.668123 sudo[1611]: pam_unix(sudo:session): session closed for user root May 17 00:33:47.670628 sshd[1608]: pam_unix(sshd:session): session closed for user core May 17 00:33:47.688820 systemd[1]: sshd@5-10.0.0.5:22-10.0.0.1:57930.service: Deactivated successfully. May 17 00:33:47.690574 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:33:47.691956 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. May 17 00:33:47.698771 systemd[1]: Started sshd@6-10.0.0.5:22-10.0.0.1:57938.service - OpenSSH per-connection server daemon (10.0.0.1:57938). May 17 00:33:47.699722 systemd-logind[1447]: Removed session 6. May 17 00:33:47.734859 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 57938 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:33:47.736463 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:47.741443 systemd-logind[1447]: New session 7 of user core. May 17 00:33:47.753765 systemd[1]: Started session-7.scope - Session 7 of User core. 
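The sudo session above removes two shipped audit rule files and restarts audit-rules.service; both auditctl and augenrules then report "No rules", consistent with an emptied /etc/audit/rules.d. The same sequence by hand, as a sketch with the file names copied from the log:

    # Replicate the logged session: drop the shipped rules, reload, verify empty set.
    sudo rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # expected output: "No rules"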
May 17 00:33:47.809266 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:33:47.809803 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:33:48.123783 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:33:48.124018 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:33:48.411456 dockerd[1662]: time="2025-05-17T00:33:48.409075570Z" level=info msg="Starting up" May 17 00:33:48.953980 systemd[1]: var-lib-docker-metacopy\x2dcheck861035051-merged.mount: Deactivated successfully. May 17 00:33:48.978504 dockerd[1662]: time="2025-05-17T00:33:48.978445435Z" level=info msg="Loading containers: start." May 17 00:33:49.096570 kernel: Initializing XFRM netlink socket May 17 00:33:49.190323 systemd-networkd[1401]: docker0: Link UP May 17 00:33:49.223169 dockerd[1662]: time="2025-05-17T00:33:49.223078396Z" level=info msg="Loading containers: done." May 17 00:33:49.238595 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1538588476-merged.mount: Deactivated successfully. May 17 00:33:49.242106 dockerd[1662]: time="2025-05-17T00:33:49.242041666Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:33:49.242186 dockerd[1662]: time="2025-05-17T00:33:49.242162933Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:33:49.242295 dockerd[1662]: time="2025-05-17T00:33:49.242275024Z" level=info msg="Daemon has completed initialization" May 17 00:33:49.284630 dockerd[1662]: time="2025-05-17T00:33:49.284546663Z" level=info msg="API listen on /run/docker.sock" May 17 00:33:49.284795 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:33:50.024184 containerd[1461]: time="2025-05-17T00:33:50.024140680Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:33:50.618438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230181097.mount: Deactivated successfully. 
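Once "API listen on /run/docker.sock" appears, the daemon can be exercised through the CLI or directly over the Unix socket. A quick smoke test, assuming nothing beyond what the log shows:

    # Via the CLI (docker.socket is socket-activated, so this also works cold).
    docker version --format '{{.Server.Version}}'
    # Or raw over the socket the daemon announced.
    curl --unix-socket /run/docker.sock http://localhost/version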
May 17 00:33:51.894367 containerd[1461]: time="2025-05-17T00:33:51.894302369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:51.895469 containerd[1461]: time="2025-05-17T00:33:51.895376454Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 17 00:33:51.896936 containerd[1461]: time="2025-05-17T00:33:51.896873602Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:51.900221 containerd[1461]: time="2025-05-17T00:33:51.900187979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:51.901495 containerd[1461]: time="2025-05-17T00:33:51.901435188Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.87724762s" May 17 00:33:51.901495 containerd[1461]: time="2025-05-17T00:33:51.901484531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:33:51.902148 containerd[1461]: time="2025-05-17T00:33:51.902093994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:33:54.166621 containerd[1461]: time="2025-05-17T00:33:54.166557680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:54.167471 containerd[1461]: time="2025-05-17T00:33:54.167394911Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 17 00:33:54.168785 containerd[1461]: time="2025-05-17T00:33:54.168735415Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:54.172122 containerd[1461]: time="2025-05-17T00:33:54.172085419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:54.173181 containerd[1461]: time="2025-05-17T00:33:54.173118938Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 2.270988715s" May 17 00:33:54.173181 containerd[1461]: time="2025-05-17T00:33:54.173163942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:33:54.173797 
containerd[1461]: time="2025-05-17T00:33:54.173770610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:33:54.275843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:33:54.295973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:33:54.499873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:33:54.507353 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:33:54.594275 kubelet[1876]: E0517 00:33:54.594197 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:33:54.600808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:33:54.601047 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:33:56.580520 containerd[1461]: time="2025-05-17T00:33:56.580445865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:56.581592 containerd[1461]: time="2025-05-17T00:33:56.581554444Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 17 00:33:56.584362 containerd[1461]: time="2025-05-17T00:33:56.583484865Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:56.587257 containerd[1461]: time="2025-05-17T00:33:56.587220222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:56.588620 containerd[1461]: time="2025-05-17T00:33:56.588557310Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.414731246s" May 17 00:33:56.588664 containerd[1461]: time="2025-05-17T00:33:56.588620418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:33:56.589133 containerd[1461]: time="2025-05-17T00:33:56.589103113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:33:58.522754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908805294.mount: Deactivated successfully. 
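The control-plane images being pulled one at a time here (apiserver, controller-manager, scheduler, then proxy, coredns, pause, and etcd below) are the set kubeadm needs; they can also be fetched explicitly ahead of kubeadm init. Two equivalent sketches, using only image names and the version from the log:

    # Let kubeadm resolve and pull its full image set for the target version.
    sudo kubeadm config images pull --kubernetes-version v1.32.5
    # Or pull a single image straight into containerd's k8s.io namespace.
    sudo ctr --namespace k8s.io images pull registry.k8s.io/kube-scheduler:v1.32.5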
May 17 00:33:59.016092 containerd[1461]: time="2025-05-17T00:33:59.015930498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:59.017065 containerd[1461]: time="2025-05-17T00:33:59.017014591Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 17 00:33:59.018385 containerd[1461]: time="2025-05-17T00:33:59.018312365Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:59.021361 containerd[1461]: time="2025-05-17T00:33:59.021309367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:33:59.022304 containerd[1461]: time="2025-05-17T00:33:59.022234502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.433089861s" May 17 00:33:59.022304 containerd[1461]: time="2025-05-17T00:33:59.022291048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:33:59.022967 containerd[1461]: time="2025-05-17T00:33:59.022862951Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:33:59.578151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664478093.mount: Deactivated successfully. 
May 17 00:34:03.207724 containerd[1461]: time="2025-05-17T00:34:03.207630218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:03.209342 containerd[1461]: time="2025-05-17T00:34:03.209284782Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:34:03.216928 containerd[1461]: time="2025-05-17T00:34:03.216857065Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:03.222377 containerd[1461]: time="2025-05-17T00:34:03.222309863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:03.225567 containerd[1461]: time="2025-05-17T00:34:03.225501370Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.202603654s" May 17 00:34:03.225567 containerd[1461]: time="2025-05-17T00:34:03.225564007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:34:03.226842 containerd[1461]: time="2025-05-17T00:34:03.226805025Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:34:04.022887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246322150.mount: Deactivated successfully. 
May 17 00:34:04.036462 containerd[1461]: time="2025-05-17T00:34:04.036350090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:04.038561 containerd[1461]: time="2025-05-17T00:34:04.038445290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:34:04.040196 containerd[1461]: time="2025-05-17T00:34:04.040144066Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:04.043674 containerd[1461]: time="2025-05-17T00:34:04.043605429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:04.044705 containerd[1461]: time="2025-05-17T00:34:04.044628328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 817.625121ms" May 17 00:34:04.044705 containerd[1461]: time="2025-05-17T00:34:04.044687659Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:34:04.045210 containerd[1461]: time="2025-05-17T00:34:04.045168180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:34:04.730390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:34:04.738841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:34:04.741909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967253966.mount: Deactivated successfully. May 17 00:34:04.946322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:34:04.952312 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:34:05.403604 kubelet[1966]: E0517 00:34:05.403500 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:05.408566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:05.408789 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:34:08.750863 containerd[1461]: time="2025-05-17T00:34:08.750785543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:08.829862 containerd[1461]: time="2025-05-17T00:34:08.829776167Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 17 00:34:08.871232 containerd[1461]: time="2025-05-17T00:34:08.871174078Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:08.931965 containerd[1461]: time="2025-05-17T00:34:08.931896816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:08.933335 containerd[1461]: time="2025-05-17T00:34:08.933281474Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.888077847s" May 17 00:34:08.933335 containerd[1461]: time="2025-05-17T00:34:08.933314275Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:34:11.858986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:34:11.867775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:34:11.942081 systemd[1]: Reloading requested from client PID 2054 ('systemctl') (unit session-7.scope)... May 17 00:34:11.942098 systemd[1]: Reloading... May 17 00:34:12.039571 zram_generator::config[2093]: No configuration found. May 17 00:34:12.443989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:34:12.529616 systemd[1]: Reloading finished in 587 ms. May 17 00:34:12.603837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:34:12.606677 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:34:12.606956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:34:12.608934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:34:12.797242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:34:12.802397 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:34:12.843122 kubelet[2143]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:34:12.843122 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
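Two housekeeping items surface during this reload: docker.socket still declares ListenStream=/var/run/docker.sock, which systemd silently rewrites to /run/docker.sock while asking for the unit to be updated, and kubelet warns that --container-runtime-endpoint and friends belong in its config file. The socket path can be fixed without touching the vendor unit, via a drop-in; a sketch:

    # Override only the socket path; the empty ListenStream= resets the list first.
    sudo mkdir -p /etc/systemd/system/docker.socket.d
    sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload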
May 17 00:34:12.843122 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:34:12.843611 kubelet[2143]: I0517 00:34:12.843169 2143 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:34:13.770355 kubelet[2143]: I0517 00:34:13.770285 2143 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:34:13.770355 kubelet[2143]: I0517 00:34:13.770326 2143 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:34:13.770654 kubelet[2143]: I0517 00:34:13.770625 2143 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:34:13.819221 kubelet[2143]: E0517 00:34:13.819144 2143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:13.819773 kubelet[2143]: I0517 00:34:13.819707 2143 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:34:13.831909 kubelet[2143]: E0517 00:34:13.831861 2143 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:34:13.831909 kubelet[2143]: I0517 00:34:13.831894 2143 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:34:13.837356 kubelet[2143]: I0517 00:34:13.837320 2143 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:34:13.839136 kubelet[2143]: I0517 00:34:13.839032 2143 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:34:13.839395 kubelet[2143]: I0517 00:34:13.839130 2143 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:34:13.839502 kubelet[2143]: I0517 00:34:13.839396 2143 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:34:13.839502 kubelet[2143]: I0517 00:34:13.839408 2143 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:34:13.839616 kubelet[2143]: I0517 00:34:13.839596 2143 state_mem.go:36] "Initialized new in-memory state store" May 17 00:34:13.843478 kubelet[2143]: I0517 00:34:13.843428 2143 kubelet.go:446] "Attempting to sync node with API server" May 17 00:34:13.845549 kubelet[2143]: I0517 00:34:13.845489 2143 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:34:13.845549 kubelet[2143]: I0517 00:34:13.845551 2143 kubelet.go:352] "Adding apiserver pod source" May 17 00:34:13.845737 kubelet[2143]: I0517 00:34:13.845581 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:34:13.850194 kubelet[2143]: I0517 00:34:13.849330 2143 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:34:13.850194 kubelet[2143]: I0517 00:34:13.849744 2143 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:34:13.850822 kubelet[2143]: W0517 00:34:13.850790 2143 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
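The NodeConfig dump above is dense but decodes to a handful of settings: systemd cgroup driver, cgroup root /, and the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, inodesFree < 5% on both). Expressed in the KubeletConfiguration form that the earlier deprecation warnings point at, written to a scratch path since kubeadm generates the real file:

    # Sketch only: the same thresholds from the NodeConfig dump as a config file.
    tee /tmp/kubelet-config-sketch.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    cgroupRoot: /
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF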
May 17 00:34:13.851393 kubelet[2143]: W0517 00:34:13.850891 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:13.851393 kubelet[2143]: E0517 00:34:13.850965 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:13.851393 kubelet[2143]: W0517 00:34:13.850949 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:13.851393 kubelet[2143]: E0517 00:34:13.851048 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:13.853800 kubelet[2143]: I0517 00:34:13.853760 2143 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:34:13.853800 kubelet[2143]: I0517 00:34:13.853803 2143 server.go:1287] "Started kubelet" May 17 00:34:13.855841 kubelet[2143]: I0517 00:34:13.855640 2143 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:34:13.856576 kubelet[2143]: I0517 00:34:13.855986 2143 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:34:13.856576 kubelet[2143]: I0517 00:34:13.856495 2143 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:34:13.856767 kubelet[2143]: I0517 00:34:13.856737 2143 server.go:479] "Adding debug handlers to kubelet server" May 17 00:34:13.856908 kubelet[2143]: I0517 00:34:13.856884 2143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:34:13.857952 kubelet[2143]: I0517 00:34:13.857913 2143 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:34:13.860170 kubelet[2143]: E0517 00:34:13.859815 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:13.860170 kubelet[2143]: I0517 00:34:13.859853 2143 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:34:13.860170 kubelet[2143]: E0517 00:34:13.859847 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms" May 17 00:34:13.860170 kubelet[2143]: I0517 00:34:13.860009 2143 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:34:13.860170 kubelet[2143]: I0517 00:34:13.860085 2143 reconciler.go:26] "Reconciler: start to sync state" May 17 00:34:13.860435 kubelet[2143]: W0517 00:34:13.860388 2143 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:13.860513 kubelet[2143]: E0517 00:34:13.860441 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:13.860908 kubelet[2143]: E0517 00:34:13.860880 2143 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:34:13.862806 kubelet[2143]: E0517 00:34:13.860665 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1840294a258887bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:34:13.853775804 +0000 UTC m=+1.047322719,LastTimestamp:2025-05-17 00:34:13.853775804 +0000 UTC m=+1.047322719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:34:13.862806 kubelet[2143]: I0517 00:34:13.862411 2143 factory.go:221] Registration of the containerd container factory successfully May 17 00:34:13.862806 kubelet[2143]: I0517 00:34:13.862423 2143 factory.go:221] Registration of the systemd container factory successfully May 17 00:34:13.862806 kubelet[2143]: I0517 00:34:13.862549 2143 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:34:13.878725 kubelet[2143]: I0517 00:34:13.878696 2143 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:34:13.879137 kubelet[2143]: I0517 00:34:13.878912 2143 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:34:13.879137 kubelet[2143]: I0517 00:34:13.878938 2143 state_mem.go:36] "Initialized new in-memory state store" May 17 00:34:13.881475 kubelet[2143]: I0517 00:34:13.881436 2143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:34:13.882892 kubelet[2143]: I0517 00:34:13.882856 2143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:34:13.882892 kubelet[2143]: I0517 00:34:13.882887 2143 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:34:13.883902 kubelet[2143]: I0517 00:34:13.882910 2143 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
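[Editor's note] The repeated reflector.go warnings above show the kubelet's informers trying to LIST resources (Node, Service, CSIDriver) from the apiserver at 10.0.0.5:6443 before that apiserver, itself a static pod on this node, is up, so every attempt ends in "connection refused" and is retried. A rough stand-in for that list-and-retry pattern using plain net/http, not client-go's actual reflector:

```go
// Sketch of the list-then-watch retry pattern behind the reflector.go
// warnings above. Plain net/http stand-in; not the client-go API.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func listNodes(client *http.Client) error {
	// Same URL shape as in the log (fieldSelector pins the node name).
	url := "https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0"
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.0.0.5:6443: connect: connection refused"
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	// A real kubelet authenticates with client certificates; verification
	// is skipped here only to keep the sketch self-contained.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for i := 0; i < 3; i++ {
		if err := listNodes(client); err != nil {
			fmt.Println("failed to list *v1.Node:", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("list succeeded; a reflector would start a WATCH here")
		return
	}
}
```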
May 17 00:34:13.883902 kubelet[2143]: I0517 00:34:13.882920 2143 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:34:13.883902 kubelet[2143]: E0517 00:34:13.882973 2143 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:34:13.884774 kubelet[2143]: W0517 00:34:13.884449 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:13.884774 kubelet[2143]: E0517 00:34:13.884506 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:13.960608 kubelet[2143]: E0517 00:34:13.960580 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:13.983873 kubelet[2143]: E0517 00:34:13.983835 2143 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:34:14.060724 kubelet[2143]: E0517 00:34:14.060631 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:14.060724 kubelet[2143]: E0517 00:34:14.060664 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms" May 17 00:34:14.160978 kubelet[2143]: E0517 00:34:14.160940 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:14.184243 kubelet[2143]: E0517 00:34:14.184199 2143 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:34:14.234877 kubelet[2143]: I0517 00:34:14.234843 2143 policy_none.go:49] "None policy: Start" May 17 00:34:14.234877 kubelet[2143]: I0517 00:34:14.234863 2143 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:34:14.234877 kubelet[2143]: I0517 00:34:14.234876 2143 state_mem.go:35] "Initializing new in-memory state store" May 17 00:34:14.242861 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:34:14.258009 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:34:14.261036 kubelet[2143]: E0517 00:34:14.261000 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:14.261190 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
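[Editor's note] The systemd lines above show the kubelet building its QoS cgroup hierarchy: kubepods.slice, with kubepods-burstable.slice and kubepods-besteffort.slice nested beneath it (Guaranteed pods sit directly under kubepods.slice). Per-pod slices created later embed the pod UID, e.g. kubepods-burstable-podcee2c3622f8487b83d29f27b29762f73.slice. A small sketch of that name construction; the helper is illustrative, not the kubelet's:

```go
// Sketch of the systemd slice naming visible in the log above.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	// systemd treats "-" in unit names as a path separator, so dashes
	// inside a pod UID are rewritten to "_" (the UIDs in this log
	// happen to contain none).
	safe := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", safe)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, safe)
}

func main() {
	fmt.Println(podSlice("burstable", "cee2c3622f8487b83d29f27b29762f73"))
	// kubepods-burstable-podcee2c3622f8487b83d29f27b29762f73.slice
}
```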
May 17 00:34:14.272391 kubelet[2143]: I0517 00:34:14.272357 2143 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:34:14.272651 kubelet[2143]: I0517 00:34:14.272593 2143 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:34:14.272651 kubelet[2143]: I0517 00:34:14.272609 2143 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:34:14.272842 kubelet[2143]: I0517 00:34:14.272826 2143 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:34:14.273380 kubelet[2143]: E0517 00:34:14.273362 2143 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:34:14.273437 kubelet[2143]: E0517 00:34:14.273398 2143 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:34:14.375130 kubelet[2143]: I0517 00:34:14.374977 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:34:14.375381 kubelet[2143]: E0517 00:34:14.375340 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" May 17 00:34:14.462325 kubelet[2143]: E0517 00:34:14.462259 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" May 17 00:34:14.577062 kubelet[2143]: I0517 00:34:14.577023 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:34:14.577418 kubelet[2143]: E0517 00:34:14.577382 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" May 17 00:34:14.592085 systemd[1]: Created slice kubepods-burstable-podcee2c3622f8487b83d29f27b29762f73.slice - libcontainer container kubepods-burstable-podcee2c3622f8487b83d29f27b29762f73.slice. May 17 00:34:14.606839 kubelet[2143]: E0517 00:34:14.606804 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:34:14.610318 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 17 00:34:14.623407 kubelet[2143]: E0517 00:34:14.623352 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:34:14.626054 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. 
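[Editor's note] "Attempting to register node" followed by "Unable to register node with API server" shows the kubelet POSTing its Node object to /api/v1/nodes and retrying while the apiserver is still unreachable. A minimal stand-in for that self-registration call, again with plain net/http rather than the kubelet's real client:

```go
// Sketch of the node self-registration attempt failing above.
package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"net/http"
)

func registerNode(client *http.Client, name string) error {
	node := fmt.Sprintf(`{"apiVersion":"v1","kind":"Node","metadata":{"name":%q}}`, name)
	resp, err := client.Post("https://10.0.0.5:6443/api/v1/nodes",
		"application/json", bytes.NewBufferString(node))
	if err != nil {
		return err // "connect: connection refused" while the apiserver pod starts
	}
	defer resp.Body.Close()
	fmt.Println("registration status:", resp.Status)
	return nil
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	if err := registerNode(client, "localhost"); err != nil {
		fmt.Println("Unable to register node with API server:", err)
	}
}
```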
May 17 00:34:14.628175 kubelet[2143]: E0517 00:34:14.628134 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:34:14.663683 kubelet[2143]: I0517 00:34:14.663632 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:14.663683 kubelet[2143]: I0517 00:34:14.663677 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:14.663683 kubelet[2143]: I0517 00:34:14.663712 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cee2c3622f8487b83d29f27b29762f73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cee2c3622f8487b83d29f27b29762f73\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:14.663925 kubelet[2143]: I0517 00:34:14.663740 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cee2c3622f8487b83d29f27b29762f73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cee2c3622f8487b83d29f27b29762f73\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:14.663925 kubelet[2143]: I0517 00:34:14.663760 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cee2c3622f8487b83d29f27b29762f73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cee2c3622f8487b83d29f27b29762f73\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:14.663925 kubelet[2143]: I0517 00:34:14.663780 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 00:34:14.663925 kubelet[2143]: I0517 00:34:14.663829 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:14.663925 kubelet[2143]: I0517 00:34:14.663851 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:14.664076 kubelet[2143]: I0517 00:34:14.663870 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:14.794813 kubelet[2143]: W0517 00:34:14.794767 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:14.794972 kubelet[2143]: E0517 00:34:14.794825 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:14.879815 kubelet[2143]: W0517 00:34:14.879615 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:14.879815 kubelet[2143]: E0517 00:34:14.879733 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:14.907592 kubelet[2143]: E0517 00:34:14.907514 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:14.908359 containerd[1461]: time="2025-05-17T00:34:14.908309865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cee2c3622f8487b83d29f27b29762f73,Namespace:kube-system,Attempt:0,}" May 17 00:34:14.924806 kubelet[2143]: E0517 00:34:14.924771 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:14.927625 containerd[1461]: time="2025-05-17T00:34:14.927582957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 17 00:34:14.928891 kubelet[2143]: E0517 00:34:14.928818 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:14.929395 containerd[1461]: time="2025-05-17T00:34:14.929350958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 17 00:34:14.936155 kubelet[2143]: W0517 00:34:14.936086 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:14.936200 kubelet[2143]: E0517 00:34:14.936163 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:14.979338 kubelet[2143]: I0517 00:34:14.979291 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:34:14.979833 kubelet[2143]: E0517 00:34:14.979770 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" May 17 00:34:15.038631 kubelet[2143]: W0517 00:34:15.038504 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:15.038732 kubelet[2143]: E0517 00:34:15.038639 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:15.262928 kubelet[2143]: E0517 00:34:15.262798 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s" May 17 00:34:15.781159 kubelet[2143]: I0517 00:34:15.781116 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:34:15.781573 kubelet[2143]: E0517 00:34:15.781512 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" May 17 00:34:15.947021 kubelet[2143]: E0517 00:34:15.946965 2143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:16.553191 kubelet[2143]: W0517 00:34:16.553119 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:16.553191 kubelet[2143]: E0517 00:34:16.553174 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:16.578134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241026097.mount: Deactivated successfully. 
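[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warning fires because the host's resolv.conf lists more nameservers than the resolver supports, so the kubelet truncates the list and logs the applied line; here three entries (1.1.1.1 1.0.0.1 8.8.8.8) survive, matching the classic glibc limit of 3. A sketch of that clamp, with the limit assumed from that convention:

```go
// Sketch of the nameserver clamp behind the dns.go warnings above.
package main

import "fmt"

const maxNameservers = 3 // classic glibc resolv.conf limit (assumed)

func clampNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	ns, truncated := clampNameservers(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	if truncated {
		fmt.Println("Nameserver limits were exceeded, applied:", ns)
	}
}
```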
May 17 00:34:16.585305 containerd[1461]: time="2025-05-17T00:34:16.585227781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:34:16.587007 containerd[1461]: time="2025-05-17T00:34:16.586933059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:34:16.588011 containerd[1461]: time="2025-05-17T00:34:16.587970619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:34:16.588994 containerd[1461]: time="2025-05-17T00:34:16.588941371Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:34:16.589674 containerd[1461]: time="2025-05-17T00:34:16.589620459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:34:16.590606 containerd[1461]: time="2025-05-17T00:34:16.590551293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:34:16.591630 containerd[1461]: time="2025-05-17T00:34:16.591582962Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:34:16.596040 containerd[1461]: time="2025-05-17T00:34:16.595975680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:34:16.596853 containerd[1461]: time="2025-05-17T00:34:16.596828445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.669162105s" May 17 00:34:16.598132 containerd[1461]: time="2025-05-17T00:34:16.598089144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.689699996s" May 17 00:34:16.599806 containerd[1461]: time="2025-05-17T00:34:16.599757771Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.670327029s" May 17 00:34:16.719904 kubelet[2143]: W0517 00:34:16.719575 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:16.719904 kubelet[2143]: 
E0517 00:34:16.719627 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:16.729739 containerd[1461]: time="2025-05-17T00:34:16.729421129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:16.729739 containerd[1461]: time="2025-05-17T00:34:16.729494590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:16.729739 containerd[1461]: time="2025-05-17T00:34:16.729509368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:16.729739 containerd[1461]: time="2025-05-17T00:34:16.729607537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:16.730440 containerd[1461]: time="2025-05-17T00:34:16.729892367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:16.730440 containerd[1461]: time="2025-05-17T00:34:16.729949377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:16.730440 containerd[1461]: time="2025-05-17T00:34:16.729961150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:16.730440 containerd[1461]: time="2025-05-17T00:34:16.730042316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:16.730798 containerd[1461]: time="2025-05-17T00:34:16.730621011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:16.730798 containerd[1461]: time="2025-05-17T00:34:16.730700374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:16.730798 containerd[1461]: time="2025-05-17T00:34:16.730722658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:16.731307 containerd[1461]: time="2025-05-17T00:34:16.730905800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:16.762682 systemd[1]: Started cri-containerd-277695e0cf53e4ba1c16afab267f01c9bcb536db7206a90d32fde7bcf7362f7a.scope - libcontainer container 277695e0cf53e4ba1c16afab267f01c9bcb536db7206a90d32fde7bcf7362f7a. May 17 00:34:16.766935 systemd[1]: Started cri-containerd-92d5d154173abc9853d8838dfd1cabe160c811e5306702682aa3a5979e3395f7.scope - libcontainer container 92d5d154173abc9853d8838dfd1cabe160c811e5306702682aa3a5979e3395f7. May 17 00:34:16.769190 systemd[1]: Started cri-containerd-f83bf843da482692fd54651381ad873d4ccee8699da05d8ac9a0d92f8c09017a.scope - libcontainer container f83bf843da482692fd54651381ad873d4ccee8699da05d8ac9a0d92f8c09017a. 
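[Editor's note] Each sandbox started above gets a systemd transient scope named after its sandbox ID (cri-containerd-<sandbox-id>.scope), which is how the systemd cgroup driver tracks the shim's cgroup. A trivial sketch of that naming, grounded directly in the "Started cri-containerd-..." lines:

```go
// Sketch of the scope unit naming visible in the systemd lines above.
package main

import "fmt"

func sandboxScope(sandboxID string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", sandboxID)
}

func main() {
	fmt.Println(sandboxScope(
		"277695e0cf53e4ba1c16afab267f01c9bcb536db7206a90d32fde7bcf7362f7a"))
}
```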
May 17 00:34:16.801284 containerd[1461]: time="2025-05-17T00:34:16.801226861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"277695e0cf53e4ba1c16afab267f01c9bcb536db7206a90d32fde7bcf7362f7a\"" May 17 00:34:16.803010 kubelet[2143]: E0517 00:34:16.802963 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:16.807574 containerd[1461]: time="2025-05-17T00:34:16.807440229Z" level=info msg="CreateContainer within sandbox \"277695e0cf53e4ba1c16afab267f01c9bcb536db7206a90d32fde7bcf7362f7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:34:16.812290 containerd[1461]: time="2025-05-17T00:34:16.812090363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cee2c3622f8487b83d29f27b29762f73,Namespace:kube-system,Attempt:0,} returns sandbox id \"f83bf843da482692fd54651381ad873d4ccee8699da05d8ac9a0d92f8c09017a\"" May 17 00:34:16.813008 kubelet[2143]: E0517 00:34:16.812976 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:16.813389 kubelet[2143]: W0517 00:34:16.813282 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused May 17 00:34:16.813389 kubelet[2143]: E0517 00:34:16.813353 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:16.814041 containerd[1461]: time="2025-05-17T00:34:16.813938716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"92d5d154173abc9853d8838dfd1cabe160c811e5306702682aa3a5979e3395f7\"" May 17 00:34:16.814969 kubelet[2143]: E0517 00:34:16.814858 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:16.815827 containerd[1461]: time="2025-05-17T00:34:16.815795165Z" level=info msg="CreateContainer within sandbox \"f83bf843da482692fd54651381ad873d4ccee8699da05d8ac9a0d92f8c09017a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:34:16.817314 containerd[1461]: time="2025-05-17T00:34:16.817275789Z" level=info msg="CreateContainer within sandbox \"92d5d154173abc9853d8838dfd1cabe160c811e5306702682aa3a5979e3395f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:34:16.839287 containerd[1461]: time="2025-05-17T00:34:16.839242906Z" level=info msg="CreateContainer within sandbox \"277695e0cf53e4ba1c16afab267f01c9bcb536db7206a90d32fde7bcf7362f7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"819bdb4c51be9bd54b9cd421db2293a04711bce787818a8e617e391c0f722e1f\"" May 17 
00:34:16.839976 containerd[1461]: time="2025-05-17T00:34:16.839938557Z" level=info msg="StartContainer for \"819bdb4c51be9bd54b9cd421db2293a04711bce787818a8e617e391c0f722e1f\"" May 17 00:34:16.845783 containerd[1461]: time="2025-05-17T00:34:16.845735452Z" level=info msg="CreateContainer within sandbox \"f83bf843da482692fd54651381ad873d4ccee8699da05d8ac9a0d92f8c09017a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bcb0cda730dbc480c4bdbb76c16e4f10247293ba05254661f0ba875abac5a67b\"" May 17 00:34:16.846385 containerd[1461]: time="2025-05-17T00:34:16.846232360Z" level=info msg="StartContainer for \"bcb0cda730dbc480c4bdbb76c16e4f10247293ba05254661f0ba875abac5a67b\"" May 17 00:34:16.853233 containerd[1461]: time="2025-05-17T00:34:16.853140948Z" level=info msg="CreateContainer within sandbox \"92d5d154173abc9853d8838dfd1cabe160c811e5306702682aa3a5979e3395f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c19185404745711b36c08b5c20c49ce8549789e9c0aa4b9ecb1a47dd6db66639\"" May 17 00:34:16.853835 containerd[1461]: time="2025-05-17T00:34:16.853805369Z" level=info msg="StartContainer for \"c19185404745711b36c08b5c20c49ce8549789e9c0aa4b9ecb1a47dd6db66639\"" May 17 00:34:16.863763 kubelet[2143]: E0517 00:34:16.863729 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="3.2s" May 17 00:34:16.870310 systemd[1]: Started cri-containerd-819bdb4c51be9bd54b9cd421db2293a04711bce787818a8e617e391c0f722e1f.scope - libcontainer container 819bdb4c51be9bd54b9cd421db2293a04711bce787818a8e617e391c0f722e1f. May 17 00:34:16.874201 systemd[1]: Started cri-containerd-bcb0cda730dbc480c4bdbb76c16e4f10247293ba05254661f0ba875abac5a67b.scope - libcontainer container bcb0cda730dbc480c4bdbb76c16e4f10247293ba05254661f0ba875abac5a67b. May 17 00:34:16.881517 systemd[1]: Started cri-containerd-c19185404745711b36c08b5c20c49ce8549789e9c0aa4b9ecb1a47dd6db66639.scope - libcontainer container c19185404745711b36c08b5c20c49ce8549789e9c0aa4b9ecb1a47dd6db66639. 
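[Editor's note] The controller.go:145 "Failed to ensure lease exists, will retry" entries step through intervals 200ms, 400ms, 800ms, 1.6s, and now 3.2s: a doubling backoff. A sketch of that schedule; the cap below is an assumption for illustration, since the log only shows the doubling:

```go
// Sketch of the doubling retry interval seen in the lease-controller
// lines (200ms -> 400ms -> 800ms -> 1.6s -> 3.2s).
package main

import (
	"fmt"
	"time"
)

func backoff(base time.Duration, attempt int, limit time.Duration) time.Duration {
	d := base << attempt // base * 2^attempt
	if d > limit || d <= 0 {
		return limit
	}
	return d
}

func main() {
	for attempt := 0; attempt < 6; attempt++ {
		fmt.Println("will retry in",
			backoff(200*time.Millisecond, attempt, 7*time.Second))
	}
	// 200ms 400ms 800ms 1.6s 3.2s 6.4s
}
```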
May 17 00:34:16.925278 containerd[1461]: time="2025-05-17T00:34:16.924920360Z" level=info msg="StartContainer for \"819bdb4c51be9bd54b9cd421db2293a04711bce787818a8e617e391c0f722e1f\" returns successfully" May 17 00:34:16.925278 containerd[1461]: time="2025-05-17T00:34:16.925005734Z" level=info msg="StartContainer for \"bcb0cda730dbc480c4bdbb76c16e4f10247293ba05254661f0ba875abac5a67b\" returns successfully" May 17 00:34:16.940634 containerd[1461]: time="2025-05-17T00:34:16.940519942Z" level=info msg="StartContainer for \"c19185404745711b36c08b5c20c49ce8549789e9c0aa4b9ecb1a47dd6db66639\" returns successfully" May 17 00:34:17.383508 kubelet[2143]: I0517 00:34:17.382958 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:34:17.900989 kubelet[2143]: E0517 00:34:17.900943 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:34:17.901113 kubelet[2143]: E0517 00:34:17.901079 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:17.903457 kubelet[2143]: E0517 00:34:17.902638 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:34:17.903457 kubelet[2143]: E0517 00:34:17.902856 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:17.903870 kubelet[2143]: E0517 00:34:17.903850 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:34:17.903967 kubelet[2143]: E0517 00:34:17.903942 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:18.385479 kubelet[2143]: I0517 00:34:18.385342 2143 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 00:34:18.459929 kubelet[2143]: I0517 00:34:18.459885 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:34:18.619158 kubelet[2143]: E0517 00:34:18.619071 2143 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 00:34:18.619158 kubelet[2143]: I0517 00:34:18.619099 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 00:34:18.620718 kubelet[2143]: E0517 00:34:18.620668 2143 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 00:34:18.620718 kubelet[2143]: I0517 00:34:18.620690 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 00:34:18.621874 kubelet[2143]: E0517 00:34:18.621844 2143 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" May 17 00:34:18.854646 kubelet[2143]: I0517 00:34:18.854499 2143 apiserver.go:52] "Watching apiserver" May 17 00:34:18.861138 kubelet[2143]: I0517 00:34:18.861100 2143 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:34:18.904877 kubelet[2143]: I0517 00:34:18.904855 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 00:34:18.905151 kubelet[2143]: I0517 00:34:18.905091 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 00:34:18.905194 kubelet[2143]: I0517 00:34:18.905164 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:34:18.906848 kubelet[2143]: E0517 00:34:18.906827 2143 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 00:34:18.906920 kubelet[2143]: E0517 00:34:18.906861 2143 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 17 00:34:18.907024 kubelet[2143]: E0517 00:34:18.906965 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:18.907088 kubelet[2143]: E0517 00:34:18.907063 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:18.907323 kubelet[2143]: E0517 00:34:18.907293 2143 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 00:34:18.907448 kubelet[2143]: E0517 00:34:18.907406 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:19.906543 kubelet[2143]: I0517 00:34:19.906513 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:34:19.906952 kubelet[2143]: I0517 00:34:19.906636 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 00:34:19.911330 kubelet[2143]: E0517 00:34:19.911306 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:19.913459 kubelet[2143]: E0517 00:34:19.913429 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:20.277669 systemd[1]: Reloading requested from client PID 2426 ('systemctl') (unit session-7.scope)... May 17 00:34:20.277688 systemd[1]: Reloading... May 17 00:34:20.372676 zram_generator::config[2468]: No configuration found. 
May 17 00:34:20.516205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:34:20.608602 systemd[1]: Reloading finished in 330 ms. May 17 00:34:20.657140 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:34:20.676958 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:34:20.677288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:34:20.685931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:34:20.848436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:34:20.856877 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:34:20.894425 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:34:20.894425 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:34:20.894425 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:34:20.894425 kubelet[2510]: I0517 00:34:20.894154 2510 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:34:20.902442 kubelet[2510]: I0517 00:34:20.902406 2510 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:34:20.902442 kubelet[2510]: I0517 00:34:20.902432 2510 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:34:20.902711 kubelet[2510]: I0517 00:34:20.902686 2510 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:34:20.903842 kubelet[2510]: I0517 00:34:20.903817 2510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:34:20.905750 kubelet[2510]: I0517 00:34:20.905719 2510 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:34:20.909063 kubelet[2510]: E0517 00:34:20.909020 2510 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:34:20.909063 kubelet[2510]: I0517 00:34:20.909056 2510 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:34:20.917079 kubelet[2510]: I0517 00:34:20.917050 2510 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:34:20.917306 kubelet[2510]: I0517 00:34:20.917273 2510 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:34:20.917471 kubelet[2510]: I0517 00:34:20.917300 2510 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:34:20.917561 kubelet[2510]: I0517 00:34:20.917475 2510 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:34:20.917561 kubelet[2510]: I0517 00:34:20.917485 2510 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:34:20.917561 kubelet[2510]: I0517 00:34:20.917547 2510 state_mem.go:36] "Initialized new in-memory state store" May 17 00:34:20.917715 kubelet[2510]: I0517 00:34:20.917696 2510 kubelet.go:446] "Attempting to sync node with API server" May 17 00:34:20.917739 kubelet[2510]: I0517 00:34:20.917720 2510 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:34:20.917739 kubelet[2510]: I0517 00:34:20.917737 2510 kubelet.go:352] "Adding apiserver pod source" May 17 00:34:20.917780 kubelet[2510]: I0517 00:34:20.917747 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:34:20.918457 kubelet[2510]: I0517 00:34:20.918431 2510 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:34:20.918845 kubelet[2510]: I0517 00:34:20.918822 2510 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:34:20.919307 kubelet[2510]: I0517 00:34:20.919235 2510 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:34:20.919307 kubelet[2510]: I0517 00:34:20.919264 2510 server.go:1287] "Started kubelet" May 17 00:34:20.921828 kubelet[2510]: I0517 00:34:20.919466 2510 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:34:20.921828 kubelet[2510]: I0517 00:34:20.919508 2510 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:34:20.921828 kubelet[2510]: I0517 00:34:20.919749 2510 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:34:20.921828 kubelet[2510]: I0517 00:34:20.920277 2510 server.go:479] "Adding debug handlers to kubelet server" May 17 00:34:20.922001 kubelet[2510]: I0517 00:34:20.921967 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:34:20.922327 kubelet[2510]: I0517 00:34:20.922297 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:34:20.924960 kubelet[2510]: E0517 00:34:20.924924 2510 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:20.924999 kubelet[2510]: I0517 00:34:20.924980 2510 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:34:20.925176 kubelet[2510]: I0517 00:34:20.925154 2510 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:34:20.925398 kubelet[2510]: I0517 00:34:20.925348 2510 reconciler.go:26] "Reconciler: start to sync state" May 17 00:34:20.925551 kubelet[2510]: I0517 00:34:20.925498 2510 factory.go:221] Registration of the systemd container factory successfully May 17 00:34:20.925651 kubelet[2510]: I0517 00:34:20.925624 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:34:20.927396 kubelet[2510]: E0517 00:34:20.926495 2510 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:34:20.927762 kubelet[2510]: I0517 00:34:20.927734 2510 factory.go:221] Registration of the containerd container factory successfully May 17 00:34:20.934020 kubelet[2510]: I0517 00:34:20.933964 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:34:20.936706 kubelet[2510]: I0517 00:34:20.936344 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:34:20.936706 kubelet[2510]: I0517 00:34:20.936384 2510 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:34:20.936706 kubelet[2510]: I0517 00:34:20.936407 2510 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
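[Editor's note] Unlike the first kubelet instance, the restarted one (PID 2510) logs "Loading cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem": a single combined PEM file, a symlink swapped at each rotation, that holds both the client certificate and its private key. A sketch of loading such a combined file with crypto/tls, which accepts the same path for both arguments because it scans CERTIFICATE and PRIVATE KEY blocks independently:

```go
// Sketch: loading the combined cert+key PEM referenced in the log above.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	const p = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(p, p) // same file: cert block(s) + key block
	if err != nil {
		fmt.Println("load failed (expected when run off-node):", err)
		return
	}
	fmt.Println("loaded chain of", len(cert.Certificate), "certificate(s)")
}
```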
May 17 00:34:20.936706 kubelet[2510]: I0517 00:34:20.936414 2510 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:34:20.936706 kubelet[2510]: E0517 00:34:20.936466 2510 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:34:20.961091 kubelet[2510]: I0517 00:34:20.961063 2510 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:34:20.961091 kubelet[2510]: I0517 00:34:20.961076 2510 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:34:20.961091 kubelet[2510]: I0517 00:34:20.961094 2510 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:34:20.961316 kubelet[2510]: I0517 00:34:20.961240 2510 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:34:20.961316 kubelet[2510]: I0517 00:34:20.961251 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:34:20.961316 kubelet[2510]: I0517 00:34:20.961268 2510 policy_none.go:49] "None policy: Start"
May 17 00:34:20.961316 kubelet[2510]: I0517 00:34:20.961277 2510 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:34:20.961316 kubelet[2510]: I0517 00:34:20.961286 2510 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:34:20.961430 kubelet[2510]: I0517 00:34:20.961385 2510 state_mem.go:75] "Updated machine memory state"
May 17 00:34:20.965407 kubelet[2510]: I0517 00:34:20.965370 2510 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:34:20.965667 kubelet[2510]: I0517 00:34:20.965565 2510 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:34:20.965667 kubelet[2510]: I0517 00:34:20.965582 2510 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:34:20.965747 kubelet[2510]: I0517 00:34:20.965740 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:34:20.966341 kubelet[2510]: E0517 00:34:20.966322 2510 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:34:21.037252 kubelet[2510]: I0517 00:34:21.037220 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:21.037404 kubelet[2510]: I0517 00:34:21.037272 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 17 00:34:21.037404 kubelet[2510]: I0517 00:34:21.037352 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 17 00:34:21.072213 kubelet[2510]: I0517 00:34:21.072169 2510 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 17 00:34:21.113129 kubelet[2510]: E0517 00:34:21.112941 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 17 00:34:21.113129 kubelet[2510]: E0517 00:34:21.113054 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 17 00:34:21.169754 kubelet[2510]: I0517 00:34:21.169637 2510 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 17 00:34:21.169754 kubelet[2510]: I0517 00:34:21.169727 2510 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 17 00:34:21.227159 kubelet[2510]: I0517 00:34:21.227105 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 17 00:34:21.227159 kubelet[2510]: I0517 00:34:21.227157 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cee2c3622f8487b83d29f27b29762f73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cee2c3622f8487b83d29f27b29762f73\") " pod="kube-system/kube-apiserver-localhost"
May 17 00:34:21.227350 kubelet[2510]: I0517 00:34:21.227184 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:21.227350 kubelet[2510]: I0517 00:34:21.227205 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:21.227350 kubelet[2510]: I0517 00:34:21.227229 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:21.227350 kubelet[2510]: I0517 00:34:21.227247 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:21.227350 kubelet[2510]: I0517 00:34:21.227266 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cee2c3622f8487b83d29f27b29762f73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cee2c3622f8487b83d29f27b29762f73\") " pod="kube-system/kube-apiserver-localhost"
May 17 00:34:21.227476 kubelet[2510]: I0517 00:34:21.227282 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cee2c3622f8487b83d29f27b29762f73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cee2c3622f8487b83d29f27b29762f73\") " pod="kube-system/kube-apiserver-localhost"
May 17 00:34:21.227476 kubelet[2510]: I0517 00:34:21.227299 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:21.404657 kubelet[2510]: E0517 00:34:21.404617 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:21.413296 kubelet[2510]: E0517 00:34:21.413211 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:21.413448 kubelet[2510]: E0517 00:34:21.413322 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:21.918439 kubelet[2510]: I0517 00:34:21.918401 2510 apiserver.go:52] "Watching apiserver"
May 17 00:34:21.925525 kubelet[2510]: I0517 00:34:21.925491 2510 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 00:34:21.950597 kubelet[2510]: E0517 00:34:21.949560 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:21.950597 kubelet[2510]: I0517 00:34:21.949600 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:21.950597 kubelet[2510]: I0517 00:34:21.949646 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 17 00:34:22.165571 kubelet[2510]: E0517 00:34:22.165276 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 17 00:34:22.165571 kubelet[2510]: E0517 00:34:22.165518 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:22.166424 kubelet[2510]: E0517 00:34:22.166403 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 17 00:34:22.166566 kubelet[2510]: E0517 00:34:22.166524 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:22.618087 kubelet[2510]: I0517 00:34:22.617816 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.617795247 podStartE2EDuration="1.617795247s" podCreationTimestamp="2025-05-17 00:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:22.610490477 +0000 UTC m=+1.749498760" watchObservedRunningTime="2025-05-17 00:34:22.617795247 +0000 UTC m=+1.756803531"
May 17 00:34:22.618087 kubelet[2510]: I0517 00:34:22.617994 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.617985039 podStartE2EDuration="3.617985039s" podCreationTimestamp="2025-05-17 00:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:22.617765411 +0000 UTC m=+1.756773684" watchObservedRunningTime="2025-05-17 00:34:22.617985039 +0000 UTC m=+1.756993333"
May 17 00:34:22.634165 kubelet[2510]: I0517 00:34:22.634001 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.633930613 podStartE2EDuration="3.633930613s" podCreationTimestamp="2025-05-17 00:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:22.625817138 +0000 UTC m=+1.764825421" watchObservedRunningTime="2025-05-17 00:34:22.633930613 +0000 UTC m=+1.772938916"
May 17 00:34:22.960145 kubelet[2510]: E0517 00:34:22.959998 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:22.961906 kubelet[2510]: E0517 00:34:22.961360 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:22.961906 kubelet[2510]: E0517 00:34:22.961861 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:23.962090 kubelet[2510]: E0517 00:34:23.962047 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:25.007176 kubelet[2510]: I0517 00:34:25.007110 2510 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:34:25.007739 containerd[1461]: time="2025-05-17T00:34:25.007694292Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:34:25.008102 kubelet[2510]: I0517 00:34:25.008023 2510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:34:25.524056 kubelet[2510]: E0517 00:34:25.523976 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:26.085832 kubelet[2510]: I0517 00:34:26.076793 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8a40753-a88a-44e8-895a-a13e3a2bcf36-lib-modules\") pod \"kube-proxy-7rnfr\" (UID: \"d8a40753-a88a-44e8-895a-a13e3a2bcf36\") " pod="kube-system/kube-proxy-7rnfr"
May 17 00:34:26.085832 kubelet[2510]: I0517 00:34:26.076872 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8a40753-a88a-44e8-895a-a13e3a2bcf36-kube-proxy\") pod \"kube-proxy-7rnfr\" (UID: \"d8a40753-a88a-44e8-895a-a13e3a2bcf36\") " pod="kube-system/kube-proxy-7rnfr"
May 17 00:34:26.085832 kubelet[2510]: I0517 00:34:26.076906 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8a40753-a88a-44e8-895a-a13e3a2bcf36-xtables-lock\") pod \"kube-proxy-7rnfr\" (UID: \"d8a40753-a88a-44e8-895a-a13e3a2bcf36\") " pod="kube-system/kube-proxy-7rnfr"
May 17 00:34:26.085832 kubelet[2510]: I0517 00:34:26.076967 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2vnx\" (UniqueName: \"kubernetes.io/projected/d8a40753-a88a-44e8-895a-a13e3a2bcf36-kube-api-access-v2vnx\") pod \"kube-proxy-7rnfr\" (UID: \"d8a40753-a88a-44e8-895a-a13e3a2bcf36\") " pod="kube-system/kube-proxy-7rnfr"
May 17 00:34:26.085832 kubelet[2510]: W0517 00:34:26.077459 2510 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 17 00:34:26.088208 kubelet[2510]: E0517 00:34:26.077561 2510 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 17 00:34:26.088208 kubelet[2510]: I0517 00:34:26.077648 2510 status_manager.go:890] "Failed to get status for pod" podUID="d8a40753-a88a-44e8-895a-a13e3a2bcf36" pod="kube-system/kube-proxy-7rnfr" err="pods \"kube-proxy-7rnfr\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
May 17 00:34:26.088208 kubelet[2510]: W0517 00:34:26.077844 2510 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 17 00:34:26.088208 kubelet[2510]: E0517 00:34:26.077867 2510 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 17 00:34:26.088694 systemd[1]: Created slice kubepods-besteffort-podd8a40753_a88a_44e8_895a_a13e3a2bcf36.slice - libcontainer container kubepods-besteffort-podd8a40753_a88a_44e8_895a_a13e3a2bcf36.slice.
May 17 00:34:26.326191 systemd[1]: Created slice kubepods-besteffort-pod4ce5f5af_6449_425c_9fd8_1150c9aca23c.slice - libcontainer container kubepods-besteffort-pod4ce5f5af_6449_425c_9fd8_1150c9aca23c.slice.
May 17 00:34:26.388157 kubelet[2510]: I0517 00:34:26.387695 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ce5f5af-6449-425c-9fd8-1150c9aca23c-var-lib-calico\") pod \"tigera-operator-844669ff44-k7tj7\" (UID: \"4ce5f5af-6449-425c-9fd8-1150c9aca23c\") " pod="tigera-operator/tigera-operator-844669ff44-k7tj7"
May 17 00:34:26.388157 kubelet[2510]: I0517 00:34:26.387770 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flj79\" (UniqueName: \"kubernetes.io/projected/4ce5f5af-6449-425c-9fd8-1150c9aca23c-kube-api-access-flj79\") pod \"tigera-operator-844669ff44-k7tj7\" (UID: \"4ce5f5af-6449-425c-9fd8-1150c9aca23c\") " pod="tigera-operator/tigera-operator-844669ff44-k7tj7"
May 17 00:34:26.630894 containerd[1461]: time="2025-05-17T00:34:26.630823521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-k7tj7,Uid:4ce5f5af-6449-425c-9fd8-1150c9aca23c,Namespace:tigera-operator,Attempt:0,}"
May 17 00:34:26.691688 containerd[1461]: time="2025-05-17T00:34:26.690034529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:34:26.691688 containerd[1461]: time="2025-05-17T00:34:26.690106316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:34:26.691688 containerd[1461]: time="2025-05-17T00:34:26.690124090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:34:26.691688 containerd[1461]: time="2025-05-17T00:34:26.690414723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:34:26.732104 systemd[1]: Started cri-containerd-f40c971b1ffa245b2ea44af1dc42ffb5915510f280f84163cf83e3c2a095f7d9.scope - libcontainer container f40c971b1ffa245b2ea44af1dc42ffb5915510f280f84163cf83e3c2a095f7d9.
May 17 00:34:26.801518 containerd[1461]: time="2025-05-17T00:34:26.801431538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-k7tj7,Uid:4ce5f5af-6449-425c-9fd8-1150c9aca23c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f40c971b1ffa245b2ea44af1dc42ffb5915510f280f84163cf83e3c2a095f7d9\""
May 17 00:34:26.803874 containerd[1461]: time="2025-05-17T00:34:26.803726415Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 00:34:27.005228 kubelet[2510]: E0517 00:34:27.005013 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:27.187207 kubelet[2510]: E0517 00:34:27.187101 2510 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
May 17 00:34:27.187812 kubelet[2510]: E0517 00:34:27.187230 2510 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d8a40753-a88a-44e8-895a-a13e3a2bcf36-kube-proxy podName:d8a40753-a88a-44e8-895a-a13e3a2bcf36 nodeName:}" failed. No retries permitted until 2025-05-17 00:34:27.687202957 +0000 UTC m=+6.826211240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d8a40753-a88a-44e8-895a-a13e3a2bcf36-kube-proxy") pod "kube-proxy-7rnfr" (UID: "d8a40753-a88a-44e8-895a-a13e3a2bcf36") : failed to sync configmap cache: timed out waiting for the condition
May 17 00:34:27.443423 update_engine[1451]: I20250517 00:34:27.441401 1451 update_attempter.cc:509] Updating boot flags...
May 17 00:34:27.519577 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2614)
May 17 00:34:27.905843 kubelet[2510]: E0517 00:34:27.904749 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:27.905993 containerd[1461]: time="2025-05-17T00:34:27.905355836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rnfr,Uid:d8a40753-a88a-44e8-895a-a13e3a2bcf36,Namespace:kube-system,Attempt:0,}"
May 17 00:34:27.979588 kubelet[2510]: E0517 00:34:27.978804 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:27.982600 containerd[1461]: time="2025-05-17T00:34:27.982464003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:34:27.984746 containerd[1461]: time="2025-05-17T00:34:27.982573500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:34:27.984746 containerd[1461]: time="2025-05-17T00:34:27.984641882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:34:27.985471 containerd[1461]: time="2025-05-17T00:34:27.984774806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:34:28.039570 systemd[1]: Started cri-containerd-2374b3c100fe2c691ac2de9ec3835191d65d8eae082b2dd9604b3e2987ce3905.scope - libcontainer container 2374b3c100fe2c691ac2de9ec3835191d65d8eae082b2dd9604b3e2987ce3905.
May 17 00:34:28.109442 containerd[1461]: time="2025-05-17T00:34:28.105727514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rnfr,Uid:d8a40753-a88a-44e8-895a-a13e3a2bcf36,Namespace:kube-system,Attempt:0,} returns sandbox id \"2374b3c100fe2c691ac2de9ec3835191d65d8eae082b2dd9604b3e2987ce3905\""
May 17 00:34:28.109633 kubelet[2510]: E0517 00:34:28.106741 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:28.111027 containerd[1461]: time="2025-05-17T00:34:28.110951639Z" level=info msg="CreateContainer within sandbox \"2374b3c100fe2c691ac2de9ec3835191d65d8eae082b2dd9604b3e2987ce3905\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:34:28.133503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493566850.mount: Deactivated successfully.
May 17 00:34:28.136867 containerd[1461]: time="2025-05-17T00:34:28.136794693Z" level=info msg="CreateContainer within sandbox \"2374b3c100fe2c691ac2de9ec3835191d65d8eae082b2dd9604b3e2987ce3905\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"509dd50596b581dd17c741499463f84d5abe7a8d7a1ffa1f1e9d88f232ec4962\""
May 17 00:34:28.137689 containerd[1461]: time="2025-05-17T00:34:28.137613337Z" level=info msg="StartContainer for \"509dd50596b581dd17c741499463f84d5abe7a8d7a1ffa1f1e9d88f232ec4962\""
May 17 00:34:28.178860 systemd[1]: Started cri-containerd-509dd50596b581dd17c741499463f84d5abe7a8d7a1ffa1f1e9d88f232ec4962.scope - libcontainer container 509dd50596b581dd17c741499463f84d5abe7a8d7a1ffa1f1e9d88f232ec4962.
May 17 00:34:28.252178 containerd[1461]: time="2025-05-17T00:34:28.252110537Z" level=info msg="StartContainer for \"509dd50596b581dd17c741499463f84d5abe7a8d7a1ffa1f1e9d88f232ec4962\" returns successfully"
May 17 00:34:28.968723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880877664.mount: Deactivated successfully.
May 17 00:34:28.982334 kubelet[2510]: E0517 00:34:28.982179 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:28.983349 kubelet[2510]: E0517 00:34:28.982692 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:29.005726 kubelet[2510]: I0517 00:34:29.004591 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7rnfr" podStartSLOduration=4.004566901 podStartE2EDuration="4.004566901s" podCreationTimestamp="2025-05-17 00:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:29.004567763 +0000 UTC m=+8.143576066" watchObservedRunningTime="2025-05-17 00:34:29.004566901 +0000 UTC m=+8.143575184"
May 17 00:34:29.938292 kubelet[2510]: E0517 00:34:29.938233 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:29.984753 kubelet[2510]: E0517 00:34:29.983524 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:30.310574 containerd[1461]: time="2025-05-17T00:34:30.310405123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:34:30.312278 containerd[1461]: time="2025-05-17T00:34:30.312043601Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 17 00:34:30.313574 containerd[1461]: time="2025-05-17T00:34:30.313516976Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:34:30.317801 containerd[1461]: time="2025-05-17T00:34:30.317733484Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:34:30.318905 containerd[1461]: time="2025-05-17T00:34:30.318845372Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.515040498s"
May 17 00:34:30.318905 containerd[1461]: time="2025-05-17T00:34:30.318900167Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 17 00:34:30.327339 containerd[1461]: time="2025-05-17T00:34:30.327274350Z" level=info msg="CreateContainer within sandbox \"f40c971b1ffa245b2ea44af1dc42ffb5915510f280f84163cf83e3c2a095f7d9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 00:34:30.354395 containerd[1461]: time="2025-05-17T00:34:30.354122816Z" level=info msg="CreateContainer within sandbox \"f40c971b1ffa245b2ea44af1dc42ffb5915510f280f84163cf83e3c2a095f7d9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b2cf0a9078e7e5a9259802dc3e5195b11a797b4209de2c0f43259570628580a6\""
May 17 00:34:30.354975 containerd[1461]: time="2025-05-17T00:34:30.354915430Z" level=info msg="StartContainer for \"b2cf0a9078e7e5a9259802dc3e5195b11a797b4209de2c0f43259570628580a6\""
May 17 00:34:30.404891 systemd[1]: Started cri-containerd-b2cf0a9078e7e5a9259802dc3e5195b11a797b4209de2c0f43259570628580a6.scope - libcontainer container b2cf0a9078e7e5a9259802dc3e5195b11a797b4209de2c0f43259570628580a6.
May 17 00:34:30.456019 containerd[1461]: time="2025-05-17T00:34:30.455944851Z" level=info msg="StartContainer for \"b2cf0a9078e7e5a9259802dc3e5195b11a797b4209de2c0f43259570628580a6\" returns successfully"
May 17 00:34:35.533433 kubelet[2510]: E0517 00:34:35.533359 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:35.562133 kubelet[2510]: I0517 00:34:35.562051 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-k7tj7" podStartSLOduration=6.044260043 podStartE2EDuration="9.562032187s" podCreationTimestamp="2025-05-17 00:34:26 +0000 UTC" firstStartedPulling="2025-05-17 00:34:26.803085446 +0000 UTC m=+5.942093729" lastFinishedPulling="2025-05-17 00:34:30.32085759 +0000 UTC m=+9.459865873" observedRunningTime="2025-05-17 00:34:31.015599871 +0000 UTC m=+10.154608184" watchObservedRunningTime="2025-05-17 00:34:35.562032187 +0000 UTC m=+14.701040480"
May 17 00:34:36.004175 kubelet[2510]: E0517 00:34:36.004090 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:37.277642 sudo[1644]: pam_unix(sudo:session): session closed for user root
May 17 00:34:37.284829 sshd[1641]: pam_unix(sshd:session): session closed for user core
May 17 00:34:37.293233 systemd[1]: sshd@6-10.0.0.5:22-10.0.0.1:57938.service: Deactivated successfully.
May 17 00:34:37.299942 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:34:37.300516 systemd[1]: session-7.scope: Consumed 5.469s CPU time, 156.8M memory peak, 0B memory swap peak.
May 17 00:34:37.301830 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
May 17 00:34:37.308310 systemd-logind[1447]: Removed session 7.
May 17 00:34:41.398987 systemd[1]: Created slice kubepods-besteffort-pod88aec8b6_3930_44bb_8768_f9663673c07b.slice - libcontainer container kubepods-besteffort-pod88aec8b6_3930_44bb_8768_f9663673c07b.slice.
May 17 00:34:41.441634 kubelet[2510]: I0517 00:34:41.439926 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/88aec8b6-3930-44bb-8768-f9663673c07b-typha-certs\") pod \"calico-typha-54c5f49cdd-6955g\" (UID: \"88aec8b6-3930-44bb-8768-f9663673c07b\") " pod="calico-system/calico-typha-54c5f49cdd-6955g" May 17 00:34:41.441634 kubelet[2510]: I0517 00:34:41.439993 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5nk\" (UniqueName: \"kubernetes.io/projected/88aec8b6-3930-44bb-8768-f9663673c07b-kube-api-access-ch5nk\") pod \"calico-typha-54c5f49cdd-6955g\" (UID: \"88aec8b6-3930-44bb-8768-f9663673c07b\") " pod="calico-system/calico-typha-54c5f49cdd-6955g" May 17 00:34:41.441634 kubelet[2510]: I0517 00:34:41.440026 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88aec8b6-3930-44bb-8768-f9663673c07b-tigera-ca-bundle\") pod \"calico-typha-54c5f49cdd-6955g\" (UID: \"88aec8b6-3930-44bb-8768-f9663673c07b\") " pod="calico-system/calico-typha-54c5f49cdd-6955g" May 17 00:34:41.532788 systemd[1]: Created slice kubepods-besteffort-pod46a62f89_cfb9_472c_b586_1b3047b7d57e.slice - libcontainer container kubepods-besteffort-pod46a62f89_cfb9_472c_b586_1b3047b7d57e.slice. May 17 00:34:41.540759 kubelet[2510]: I0517 00:34:41.540703 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-cni-bin-dir\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.540759 kubelet[2510]: I0517 00:34:41.540747 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-cni-net-dir\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.540759 kubelet[2510]: I0517 00:34:41.540765 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-var-lib-calico\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.540990 kubelet[2510]: I0517 00:34:41.540783 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-xtables-lock\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.540990 kubelet[2510]: I0517 00:34:41.540811 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xftx\" (UniqueName: \"kubernetes.io/projected/46a62f89-cfb9-472c-b586-1b3047b7d57e-kube-api-access-4xftx\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.540990 kubelet[2510]: I0517 00:34:41.540840 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-flexvol-driver-host\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.540990 kubelet[2510]: I0517 00:34:41.540859 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-policysync\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.540990 kubelet[2510]: I0517 00:34:41.540875 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-var-run-calico\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.541245 kubelet[2510]: I0517 00:34:41.540892 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/46a62f89-cfb9-472c-b586-1b3047b7d57e-node-certs\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.541245 kubelet[2510]: I0517 00:34:41.540912 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-lib-modules\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.541245 kubelet[2510]: I0517 00:34:41.540929 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a62f89-cfb9-472c-b586-1b3047b7d57e-tigera-ca-bundle\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.541245 kubelet[2510]: I0517 00:34:41.540950 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/46a62f89-cfb9-472c-b586-1b3047b7d57e-cni-log-dir\") pod \"calico-node-2jsz6\" (UID: \"46a62f89-cfb9-472c-b586-1b3047b7d57e\") " pod="calico-system/calico-node-2jsz6" May 17 00:34:41.667691 kubelet[2510]: E0517 00:34:41.667233 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.667691 kubelet[2510]: W0517 00:34:41.667268 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.667691 kubelet[2510]: E0517 00:34:41.667316 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:41.668074 kubelet[2510]: E0517 00:34:41.667793 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.668074 kubelet[2510]: W0517 00:34:41.667814 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.668074 kubelet[2510]: E0517 00:34:41.667834 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.707890 kubelet[2510]: E0517 00:34:41.707837 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:41.722723 containerd[1461]: time="2025-05-17T00:34:41.721643667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54c5f49cdd-6955g,Uid:88aec8b6-3930-44bb-8768-f9663673c07b,Namespace:calico-system,Attempt:0,}" May 17 00:34:41.724028 kubelet[2510]: E0517 00:34:41.723789 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.724028 kubelet[2510]: W0517 00:34:41.724023 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.724260 kubelet[2510]: E0517 00:34:41.724057 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.751183 kubelet[2510]: E0517 00:34:41.749697 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6" May 17 00:34:41.814557 containerd[1461]: time="2025-05-17T00:34:41.813555560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:41.816727 containerd[1461]: time="2025-05-17T00:34:41.816626512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:41.816727 containerd[1461]: time="2025-05-17T00:34:41.816661918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:41.816834 containerd[1461]: time="2025-05-17T00:34:41.816807994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:41.838630 kubelet[2510]: E0517 00:34:41.838569 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.838630 kubelet[2510]: W0517 00:34:41.838613 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.838630 kubelet[2510]: E0517 00:34:41.838649 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.839067 kubelet[2510]: E0517 00:34:41.838975 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.839067 kubelet[2510]: W0517 00:34:41.838987 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.839067 kubelet[2510]: E0517 00:34:41.838998 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.839703 kubelet[2510]: E0517 00:34:41.839304 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.839703 kubelet[2510]: W0517 00:34:41.839335 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.839703 kubelet[2510]: E0517 00:34:41.839361 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.842875 kubelet[2510]: E0517 00:34:41.842833 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.842875 kubelet[2510]: W0517 00:34:41.842855 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.842875 kubelet[2510]: E0517 00:34:41.842872 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.843232 kubelet[2510]: E0517 00:34:41.843225 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.843275 kubelet[2510]: W0517 00:34:41.843234 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.843275 kubelet[2510]: E0517 00:34:41.843245 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:41.846104 kubelet[2510]: E0517 00:34:41.846055 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.846104 kubelet[2510]: W0517 00:34:41.846077 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.846104 kubelet[2510]: E0517 00:34:41.846090 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.846804 kubelet[2510]: E0517 00:34:41.846778 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.846804 kubelet[2510]: W0517 00:34:41.846795 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.846883 kubelet[2510]: E0517 00:34:41.846808 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.847080 kubelet[2510]: E0517 00:34:41.847056 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.847080 kubelet[2510]: W0517 00:34:41.847072 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.847080 kubelet[2510]: E0517 00:34:41.847082 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.847841 kubelet[2510]: E0517 00:34:41.847814 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.847841 kubelet[2510]: W0517 00:34:41.847834 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.847983 kubelet[2510]: E0517 00:34:41.847934 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.848597 kubelet[2510]: E0517 00:34:41.848318 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.848597 kubelet[2510]: W0517 00:34:41.848330 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.848597 kubelet[2510]: E0517 00:34:41.848342 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:41.848721 kubelet[2510]: E0517 00:34:41.848692 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.848721 kubelet[2510]: W0517 00:34:41.848712 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.849813 containerd[1461]: time="2025-05-17T00:34:41.849655667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2jsz6,Uid:46a62f89-cfb9-472c-b586-1b3047b7d57e,Namespace:calico-system,Attempt:0,}" May 17 00:34:41.850493 kubelet[2510]: E0517 00:34:41.849730 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.856563 kubelet[2510]: E0517 00:34:41.855436 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.856563 kubelet[2510]: W0517 00:34:41.855461 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.856563 kubelet[2510]: E0517 00:34:41.855821 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.860814 systemd[1]: Started cri-containerd-2364d445ed46ae9c02d12436b69152679e3c2bd00af78cd6d95373688b5e6755.scope - libcontainer container 2364d445ed46ae9c02d12436b69152679e3c2bd00af78cd6d95373688b5e6755. May 17 00:34:41.861865 kubelet[2510]: E0517 00:34:41.861692 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.861865 kubelet[2510]: W0517 00:34:41.861722 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.861865 kubelet[2510]: E0517 00:34:41.861748 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.872588 kubelet[2510]: E0517 00:34:41.871449 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.872588 kubelet[2510]: W0517 00:34:41.871479 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.872588 kubelet[2510]: E0517 00:34:41.871527 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:41.874948 kubelet[2510]: E0517 00:34:41.874915 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.875040 kubelet[2510]: W0517 00:34:41.874949 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.875040 kubelet[2510]: E0517 00:34:41.874994 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.876779 kubelet[2510]: E0517 00:34:41.876623 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.876779 kubelet[2510]: W0517 00:34:41.876644 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.876779 kubelet[2510]: E0517 00:34:41.876659 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.879338 kubelet[2510]: E0517 00:34:41.878733 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.879338 kubelet[2510]: W0517 00:34:41.878762 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.879338 kubelet[2510]: E0517 00:34:41.878800 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.879338 kubelet[2510]: E0517 00:34:41.879199 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.879338 kubelet[2510]: W0517 00:34:41.879229 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.879338 kubelet[2510]: E0517 00:34:41.879282 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.881580 kubelet[2510]: E0517 00:34:41.880403 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.881580 kubelet[2510]: W0517 00:34:41.880425 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.881580 kubelet[2510]: E0517 00:34:41.880443 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:41.881580 kubelet[2510]: E0517 00:34:41.881361 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.881580 kubelet[2510]: W0517 00:34:41.881373 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.881580 kubelet[2510]: E0517 00:34:41.881385 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.884876 kubelet[2510]: E0517 00:34:41.884118 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.884876 kubelet[2510]: W0517 00:34:41.884140 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.884876 kubelet[2510]: E0517 00:34:41.884155 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.884876 kubelet[2510]: I0517 00:34:41.884321 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be42aafd-fcc6-4236-98b3-c64eba42cdf6-socket-dir\") pod \"csi-node-driver-x8pqj\" (UID: \"be42aafd-fcc6-4236-98b3-c64eba42cdf6\") " pod="calico-system/csi-node-driver-x8pqj" May 17 00:34:41.886114 kubelet[2510]: E0517 00:34:41.886080 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.886167 kubelet[2510]: W0517 00:34:41.886142 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.889561 kubelet[2510]: E0517 00:34:41.887941 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:41.889561 kubelet[2510]: E0517 00:34:41.888083 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:41.889561 kubelet[2510]: W0517 00:34:41.888179 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:41.889561 kubelet[2510]: I0517 00:34:41.888122 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be42aafd-fcc6-4236-98b3-c64eba42cdf6-registration-dir\") pod \"csi-node-driver-x8pqj\" (UID: \"be42aafd-fcc6-4236-98b3-c64eba42cdf6\") " pod="calico-system/csi-node-driver-x8pqj" May 17 00:34:41.889561 kubelet[2510]: E0517 00:34:41.888207 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:34:41.889561 kubelet[2510]: E0517 00:34:41.888862 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:34:41.889561 kubelet[2510]: W0517 00:34:41.888877 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:34:41.889561 kubelet[2510]: E0517 00:34:41.888896 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:34:41.889561 kubelet[2510]: I0517 00:34:41.889494 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qlv\" (UniqueName: \"kubernetes.io/projected/be42aafd-fcc6-4236-98b3-c64eba42cdf6-kube-api-access-g2qlv\") pod \"csi-node-driver-x8pqj\" (UID: \"be42aafd-fcc6-4236-98b3-c64eba42cdf6\") " pod="calico-system/csi-node-driver-x8pqj"
May 17 00:34:41.895918 kubelet[2510]: I0517 00:34:41.895904 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be42aafd-fcc6-4236-98b3-c64eba42cdf6-kubelet-dir\") pod \"csi-node-driver-x8pqj\" (UID: \"be42aafd-fcc6-4236-98b3-c64eba42cdf6\") " pod="calico-system/csi-node-driver-x8pqj"
May 17 00:34:41.896316 kubelet[2510]: I0517 00:34:41.896297 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/be42aafd-fcc6-4236-98b3-c64eba42cdf6-varrun\") pod \"csi-node-driver-x8pqj\" (UID: \"be42aafd-fcc6-4236-98b3-c64eba42cdf6\") " pod="calico-system/csi-node-driver-x8pqj"
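The kubelet errors above all report one and the same failure, emitted once per probe attempt: the FlexVolume prober found a plugin directory named nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tried to execute its driver binary uds with the argument init, and the exec failed ("executable file not found in $PATH"). With no output to parse, unmarshalling the empty string as JSON then fails with "unexpected end of JSON input". A FlexVolume driver is expected to answer init with a JSON status object on stdout; below is a minimal sketch, in Go, of a driver that satisfies that contract. The struct mirrors the documented FlexVolume status format and is illustrative only, not the actual uds binary this node is missing.

// Minimal sketch of a FlexVolume driver answering the "init" call.
// Assumption: kubelet only needs {"status":"Success"} (plus optional
// capabilities) on stdout; an empty reply is what produces the
// "unexpected end of JSON input" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that this driver has no
		// separate attach/detach phase.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Every other call is reported as unsupported.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Installed executable as .../volume/exec/nodeagent~uds/uds, a binary along these lines would presumably quiet the probe errors; on this node the binary is simply absent, so every probe logs the same E/W/E triplet.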
May 17 00:34:41.969193 containerd[1461]: time="2025-05-17T00:34:41.968495689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:34:41.969193 containerd[1461]: time="2025-05-17T00:34:41.968640703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:34:41.969193 containerd[1461]: time="2025-05-17T00:34:41.968657094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:34:41.975459 containerd[1461]: time="2025-05-17T00:34:41.969044064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:34:41.982195 containerd[1461]: time="2025-05-17T00:34:41.980975678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54c5f49cdd-6955g,Uid:88aec8b6-3930-44bb-8768-f9663673c07b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2364d445ed46ae9c02d12436b69152679e3c2bd00af78cd6d95373688b5e6755\""
May 17 00:34:41.987374 kubelet[2510]: E0517 00:34:41.987169 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:41.997254 containerd[1461]: time="2025-05-17T00:34:41.996573967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\""
May 17 00:34:41.999003 kubelet[2510]: E0517 00:34:41.998273 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:34:41.999003 kubelet[2510]: W0517 00:34:41.998296 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:34:41.999003 kubelet[2510]: E0517 00:34:41.998318 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:34:42.019827 systemd[1]: Started cri-containerd-85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2.scope - libcontainer container 85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2.
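The dns.go:153 warning above fires because the C library resolver honors at most three nameserver entries in resolv.conf, so when the kubelet composes a pod's resolv.conf from more servers than that, it drops the excess and logs which line it actually applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A rough sketch of that check follows; the constant and the parsing are illustrative, assuming ordinary resolv.conf syntax and the glibc limit of 3, and are not kubelet's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxResolvConfNameservers mirrors glibc's MAXNS limit of 3;
// nameserver lines beyond this count are silently ignored by libc.
const maxResolvConfNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxResolvConfNameservers {
		// Report the truncated "applied" line, as the kubelet does.
		fmt.Printf("nameserver limit exceeded: %d configured, applied line is: %s\n",
			len(nameservers), strings.Join(nameservers[:maxResolvConfNameservers], " "))
	}
}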
May 17 00:34:42.081636 containerd[1461]: time="2025-05-17T00:34:42.081562131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2jsz6,Uid:46a62f89-cfb9-472c-b586-1b3047b7d57e,Namespace:calico-system,Attempt:0,} returns sandbox id \"85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2\""
May 17 00:34:42.941925 kubelet[2510]: E0517 00:34:42.940505 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6"
May 17 00:34:43.566639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989686721.mount: Deactivated successfully.
May 17 00:34:44.493752 containerd[1461]: time="2025-05-17T00:34:44.493357784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:34:44.496726 containerd[1461]: time="2025-05-17T00:34:44.496611805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669"
May 17 00:34:44.498692 containerd[1461]: time="2025-05-17T00:34:44.498625209Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:34:44.504641 containerd[1461]: time="2025-05-17T00:34:44.504510668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:34:44.505403 containerd[1461]: time="2025-05-17T00:34:44.505336965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.508670203s"
May 17 00:34:44.505403 containerd[1461]: time="2025-05-17T00:34:44.505390767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:34:44.558078 containerd[1461]: time="2025-05-17T00:34:44.557895519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:34:44.714697 containerd[1461]: time="2025-05-17T00:34:44.714594369Z" level=info msg="CreateContainer within sandbox \"2364d445ed46ae9c02d12436b69152679e3c2bd00af78cd6d95373688b5e6755\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:34:44.749704 containerd[1461]: time="2025-05-17T00:34:44.749566956Z" level=info msg="CreateContainer within sandbox \"2364d445ed46ae9c02d12436b69152679e3c2bd00af78cd6d95373688b5e6755\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d0c23b3326f2115d763c30eaad20ae3691ece88e1bec6e11a5619fba040e7640\""
May 17 00:34:44.750455 containerd[1461]: time="2025-05-17T00:34:44.750159923Z" level=info msg="StartContainer for \"d0c23b3326f2115d763c30eaad20ae3691ece88e1bec6e11a5619fba040e7640\""
May 17 00:34:44.787826 systemd[1]: Started cri-containerd-d0c23b3326f2115d763c30eaad20ae3691ece88e1bec6e11a5619fba040e7640.scope - libcontainer container d0c23b3326f2115d763c30eaad20ae3691ece88e1bec6e11a5619fba040e7640.
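The containerd entries above trace the standard CRI lifecycle for the calico-typha pod: RunPodSandbox, then PullImage, then CreateContainer inside the sandbox, then StartContainer. A compressed sketch of that call sequence against containerd's CRI socket follows; the socket path, request payloads, and elided error handling are illustrative assumptions, not a reconstruction of what this kubelet actually sent.

// Sketch of the CRI sequence RunPodSandbox -> PullImage ->
// CreateContainer -> StartContainer. Error handling is elided.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// 1. Create the pod sandbox (the pod's shared environment).
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-typha-54c5f49cdd-6955g",
			Namespace: "calico-system",
		},
	}
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// 2. Pull the container image; the response carries the resolved ref.
	pulled, _ := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.0"},
	})

	// 3. Create the container inside the sandbox, then 4. start it.
	created, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: pulled.ImageRef},
		},
		SandboxConfig: sandboxCfg,
	})
	rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
}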
May 17 00:34:44.894240 containerd[1461]: time="2025-05-17T00:34:44.894179082Z" level=info msg="StartContainer for \"d0c23b3326f2115d763c30eaad20ae3691ece88e1bec6e11a5619fba040e7640\" returns successfully"
May 17 00:34:44.943428 kubelet[2510]: E0517 00:34:44.943259 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6"
May 17 00:34:45.052477 kubelet[2510]: E0517 00:34:45.052263 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:45.081275 kubelet[2510]: I0517 00:34:45.081063 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54c5f49cdd-6955g" podStartSLOduration=1.531532535 podStartE2EDuration="4.081030995s" podCreationTimestamp="2025-05-17 00:34:41 +0000 UTC" firstStartedPulling="2025-05-17 00:34:41.992791664 +0000 UTC m=+21.131799947" lastFinishedPulling="2025-05-17 00:34:44.542290124 +0000 UTC m=+23.681298407" observedRunningTime="2025-05-17 00:34:45.080564487 +0000 UTC m=+24.219572770" watchObservedRunningTime="2025-05-17 00:34:45.081030995 +0000 UTC m=+24.220039278"
May 17 00:34:45.116972 kubelet[2510]: E0517 00:34:45.116915 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:34:45.116972 kubelet[2510]: W0517 00:34:45.116953 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:34:45.120170 kubelet[2510]: E0517 00:34:45.120131 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:34:45.120483 kubelet[2510]: E0517 00:34:45.120458 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:34:45.120483 kubelet[2510]: W0517 00:34:45.120473 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:34:45.120611 kubelet[2510]: E0517 00:34:45.120488 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
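The pod_startup_latency_tracker entry above is worth decoding, since its figures are mutually consistent. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp: 00:34:45.081030995 − 00:34:41 = 4.081030995s. podStartSLOduration is that end-to-end figure minus the image-pull window, taken from the monotonic m=+ stamps: 4.081030995 − (23.681298407 − 21.131799947) = 4.081030995 − 2.549498460 = 1.531532535s, exactly the value logged. In other words, of the ~4.08s from pod creation to running, ~2.55s went to pulling ghcr.io/flatcar/calico/typha:v3.30.0, and the SLO metric charges the pod only the remaining ~1.53s.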
May 17 00:34:46.049714 kubelet[2510]: I0517 00:34:46.049640 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:34:46.050334 kubelet[2510]: E0517 00:34:46.050039 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:34:46.132909 kubelet[2510]: E0517 00:34:46.132874 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:34:46.132909 kubelet[2510]: W0517 00:34:46.132901 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:34:46.133092 kubelet[2510]: E0517 00:34:46.132926 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:34:46.142493 kubelet[2510]: E0517 00:34:46.142475 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:34:46.142493 kubelet[2510]: W0517 00:34:46.142489 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:34:46.142660 kubelet[2510]: E0517 00:34:46.142507 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:46.142975 kubelet[2510]: E0517 00:34:46.142953 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.142975 kubelet[2510]: W0517 00:34:46.142972 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.143082 kubelet[2510]: E0517 00:34:46.142992 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.143678 kubelet[2510]: E0517 00:34:46.143652 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.143678 kubelet[2510]: W0517 00:34:46.143669 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.143798 kubelet[2510]: E0517 00:34:46.143772 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.143992 kubelet[2510]: E0517 00:34:46.143969 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.143992 kubelet[2510]: W0517 00:34:46.143989 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.144114 kubelet[2510]: E0517 00:34:46.144024 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.144289 kubelet[2510]: E0517 00:34:46.144267 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.144289 kubelet[2510]: W0517 00:34:46.144283 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.144404 kubelet[2510]: E0517 00:34:46.144302 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.144614 kubelet[2510]: E0517 00:34:46.144597 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.144690 kubelet[2510]: W0517 00:34:46.144668 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.144749 kubelet[2510]: E0517 00:34:46.144694 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:46.145004 kubelet[2510]: E0517 00:34:46.144975 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.145004 kubelet[2510]: W0517 00:34:46.144990 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.145111 kubelet[2510]: E0517 00:34:46.145009 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.145503 kubelet[2510]: E0517 00:34:46.145470 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.145503 kubelet[2510]: W0517 00:34:46.145499 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.145614 kubelet[2510]: E0517 00:34:46.145521 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.145813 kubelet[2510]: E0517 00:34:46.145793 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.145813 kubelet[2510]: W0517 00:34:46.145808 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.145888 kubelet[2510]: E0517 00:34:46.145827 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.146073 kubelet[2510]: E0517 00:34:46.146055 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.146073 kubelet[2510]: W0517 00:34:46.146068 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.146152 kubelet[2510]: E0517 00:34:46.146080 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:46.146291 kubelet[2510]: E0517 00:34:46.146273 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:46.146291 kubelet[2510]: W0517 00:34:46.146287 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:46.146348 kubelet[2510]: E0517 00:34:46.146298 2510 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:46.372714 containerd[1461]: time="2025-05-17T00:34:46.372587622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:46.374780 containerd[1461]: time="2025-05-17T00:34:46.374709939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:34:46.379869 containerd[1461]: time="2025-05-17T00:34:46.379810495Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:46.382345 containerd[1461]: time="2025-05-17T00:34:46.382299903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:46.383110 containerd[1461]: time="2025-05-17T00:34:46.383042962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.82508682s" May 17 00:34:46.383171 containerd[1461]: time="2025-05-17T00:34:46.383113705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:34:46.385495 containerd[1461]: time="2025-05-17T00:34:46.385460365Z" level=info msg="CreateContainer within sandbox \"85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:34:46.405800 containerd[1461]: time="2025-05-17T00:34:46.405736158Z" level=info msg="CreateContainer within sandbox \"85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733\"" May 17 00:34:46.406585 containerd[1461]: time="2025-05-17T00:34:46.406550071Z" level=info msg="StartContainer for \"ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733\"" May 17 00:34:46.446745 systemd[1]: Started cri-containerd-ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733.scope - libcontainer container ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733. May 17 00:34:46.480148 containerd[1461]: time="2025-05-17T00:34:46.480105993Z" level=info msg="StartContainer for \"ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733\" returns successfully" May 17 00:34:46.491329 systemd[1]: cri-containerd-ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733.scope: Deactivated successfully. 
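The kubelet errors above come from FlexVolume plugin probing: kubelet walks each vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, executes the driver binary with the argument `init`, and unmarshals a JSON status from its stdout. Until the flexvol-driver init container pulled and started in the entries above has copied the `uds` binary into place, the exec fails and the empty output yields "unexpected end of JSON input". A minimal sketch of that `init` handshake, assuming only the conventional FlexVolume call contract; this is an illustration, not Calico's actual nodeagent~uds driver:

```go
// Minimal sketch of the FlexVolume "init" handshake implied by the errors
// above: kubelet runs the driver binary with "init" and parses a JSON
// status from stdout. An empty stdout is exactly what produces
// "unexpected end of JSON input" in the log. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		fmt.Println(`{"status":"Not supported"}`)
	}
}
```

Once the driver binary exists and prints a status like this, the probe storm stops; the errors here are transient ordering noise during Calico bring-up, not a persistent fault.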
May 17 00:34:46.609355 containerd[1461]: time="2025-05-17T00:34:46.609277150Z" level=info msg="shim disconnected" id=ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733 namespace=k8s.io May 17 00:34:46.609355 containerd[1461]: time="2025-05-17T00:34:46.609345960Z" level=warning msg="cleaning up after shim disconnected" id=ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733 namespace=k8s.io May 17 00:34:46.609355 containerd[1461]: time="2025-05-17T00:34:46.609358874Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:34:46.703462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceabb475108441bd31ea480618edcf81886931a34f5072dc1e41656d7926d733-rootfs.mount: Deactivated successfully. May 17 00:34:46.937577 kubelet[2510]: E0517 00:34:46.937447 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6" May 17 00:34:47.055728 containerd[1461]: time="2025-05-17T00:34:47.055436351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:34:48.937617 kubelet[2510]: E0517 00:34:48.937558 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6" May 17 00:34:50.937750 kubelet[2510]: E0517 00:34:50.937710 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6" May 17 00:34:51.281832 containerd[1461]: time="2025-05-17T00:34:51.281709710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:51.283209 containerd[1461]: time="2025-05-17T00:34:51.283165429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:34:51.284570 containerd[1461]: time="2025-05-17T00:34:51.284432793Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:51.286791 containerd[1461]: time="2025-05-17T00:34:51.286752927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:34:51.287461 containerd[1461]: time="2025-05-17T00:34:51.287426434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.231928297s" May 17 00:34:51.287510 containerd[1461]: time="2025-05-17T00:34:51.287458344Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:34:51.290842 containerd[1461]: time="2025-05-17T00:34:51.290808436Z" level=info msg="CreateContainer within sandbox \"85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:34:51.316953 containerd[1461]: time="2025-05-17T00:34:51.316902743Z" level=info msg="CreateContainer within sandbox \"85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066\"" May 17 00:34:51.317513 containerd[1461]: time="2025-05-17T00:34:51.317479980Z" level=info msg="StartContainer for \"471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066\"" May 17 00:34:51.360337 systemd[1]: Started cri-containerd-471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066.scope - libcontainer container 471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066. May 17 00:34:51.547475 containerd[1461]: time="2025-05-17T00:34:51.547330232Z" level=info msg="StartContainer for \"471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066\" returns successfully" May 17 00:34:52.938128 kubelet[2510]: E0517 00:34:52.938029 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6" May 17 00:34:53.673216 systemd[1]: cri-containerd-471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066.scope: Deactivated successfully. May 17 00:34:53.697228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066-rootfs.mount: Deactivated successfully. May 17 00:34:53.718518 kubelet[2510]: I0517 00:34:53.712271 2510 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:34:53.883213 containerd[1461]: time="2025-05-17T00:34:53.883134489Z" level=info msg="shim disconnected" id=471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066 namespace=k8s.io May 17 00:34:53.883213 containerd[1461]: time="2025-05-17T00:34:53.883207767Z" level=warning msg="cleaning up after shim disconnected" id=471eef102514ef024962f8dbe2297c17b7dab46781d5e7af7d3105b399c4f066 namespace=k8s.io May 17 00:34:53.883213 containerd[1461]: time="2025-05-17T00:34:53.883218037Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:34:53.945724 systemd[1]: Created slice kubepods-besteffort-podbef19986_6d7f_4327_9173_74879321bea4.slice - libcontainer container kubepods-besteffort-podbef19986_6d7f_4327_9173_74879321bea4.slice. May 17 00:34:53.985416 systemd[1]: Created slice kubepods-besteffort-poda5f3228d_9614_4416_8a88_802cb784679f.slice - libcontainer container kubepods-besteffort-poda5f3228d_9614_4416_8a88_802cb784679f.slice. May 17 00:34:53.990342 systemd[1]: Created slice kubepods-besteffort-pod9537133f_5e07_4b0f_93c4_cc1221685e83.slice - libcontainer container kubepods-besteffort-pod9537133f_5e07_4b0f_93c4_cc1221685e83.slice. 
May 17 00:34:53.992619 kubelet[2510]: I0517 00:34:53.992583 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkhq\" (UniqueName: \"kubernetes.io/projected/a5f3228d-9614-4416-8a88-802cb784679f-kube-api-access-prkhq\") pod \"calico-apiserver-698b9c5d64-9t9c6\" (UID: \"a5f3228d-9614-4416-8a88-802cb784679f\") " pod="calico-apiserver/calico-apiserver-698b9c5d64-9t9c6" May 17 00:34:53.993187 kubelet[2510]: I0517 00:34:53.992620 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cf92987-bd0b-472f-a9b0-2d45c7497558-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-vss2s\" (UID: \"1cf92987-bd0b-472f-a9b0-2d45c7497558\") " pod="calico-system/goldmane-78d55f7ddc-vss2s" May 17 00:34:53.993187 kubelet[2510]: I0517 00:34:53.992650 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c975f9da-6c98-4900-bbd2-08541503e92e-config-volume\") pod \"coredns-668d6bf9bc-fmrv9\" (UID: \"c975f9da-6c98-4900-bbd2-08541503e92e\") " pod="kube-system/coredns-668d6bf9bc-fmrv9" May 17 00:34:53.993187 kubelet[2510]: I0517 00:34:53.992677 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1cf92987-bd0b-472f-a9b0-2d45c7497558-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-vss2s\" (UID: \"1cf92987-bd0b-472f-a9b0-2d45c7497558\") " pod="calico-system/goldmane-78d55f7ddc-vss2s" May 17 00:34:53.993187 kubelet[2510]: I0517 00:34:53.992699 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bef19986-6d7f-4327-9173-74879321bea4-calico-apiserver-certs\") pod \"calico-apiserver-698b9c5d64-6nfp6\" (UID: \"bef19986-6d7f-4327-9173-74879321bea4\") " pod="calico-apiserver/calico-apiserver-698b9c5d64-6nfp6" May 17 00:34:53.993187 kubelet[2510]: I0517 00:34:53.992720 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmkb\" (UniqueName: \"kubernetes.io/projected/bef19986-6d7f-4327-9173-74879321bea4-kube-api-access-rcmkb\") pod \"calico-apiserver-698b9c5d64-6nfp6\" (UID: \"bef19986-6d7f-4327-9173-74879321bea4\") " pod="calico-apiserver/calico-apiserver-698b9c5d64-6nfp6" May 17 00:34:53.993372 kubelet[2510]: I0517 00:34:53.992766 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9537133f-5e07-4b0f-93c4-cc1221685e83-tigera-ca-bundle\") pod \"calico-kube-controllers-66b4cdbc55-74hhx\" (UID: \"9537133f-5e07-4b0f-93c4-cc1221685e83\") " pod="calico-system/calico-kube-controllers-66b4cdbc55-74hhx" May 17 00:34:53.993372 kubelet[2510]: I0517 00:34:53.992803 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5f3228d-9614-4416-8a88-802cb784679f-calico-apiserver-certs\") pod \"calico-apiserver-698b9c5d64-9t9c6\" (UID: \"a5f3228d-9614-4416-8a88-802cb784679f\") " pod="calico-apiserver/calico-apiserver-698b9c5d64-9t9c6" May 17 00:34:53.993372 kubelet[2510]: I0517 00:34:53.992835 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wjm2g\" (UniqueName: \"kubernetes.io/projected/b0b2a5ee-c039-427e-9a8b-ca7df66976a4-kube-api-access-wjm2g\") pod \"coredns-668d6bf9bc-wd2nk\" (UID: \"b0b2a5ee-c039-427e-9a8b-ca7df66976a4\") " pod="kube-system/coredns-668d6bf9bc-wd2nk" May 17 00:34:53.993372 kubelet[2510]: I0517 00:34:53.992855 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-backend-key-pair\") pod \"whisker-5d98bcff46-jh685\" (UID: \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\") " pod="calico-system/whisker-5d98bcff46-jh685" May 17 00:34:53.993372 kubelet[2510]: I0517 00:34:53.992874 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-ca-bundle\") pod \"whisker-5d98bcff46-jh685\" (UID: \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\") " pod="calico-system/whisker-5d98bcff46-jh685" May 17 00:34:53.993554 kubelet[2510]: I0517 00:34:53.992897 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0b2a5ee-c039-427e-9a8b-ca7df66976a4-config-volume\") pod \"coredns-668d6bf9bc-wd2nk\" (UID: \"b0b2a5ee-c039-427e-9a8b-ca7df66976a4\") " pod="kube-system/coredns-668d6bf9bc-wd2nk" May 17 00:34:53.993554 kubelet[2510]: I0517 00:34:53.992929 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4w9m\" (UniqueName: \"kubernetes.io/projected/1cf92987-bd0b-472f-a9b0-2d45c7497558-kube-api-access-b4w9m\") pod \"goldmane-78d55f7ddc-vss2s\" (UID: \"1cf92987-bd0b-472f-a9b0-2d45c7497558\") " pod="calico-system/goldmane-78d55f7ddc-vss2s" May 17 00:34:53.993554 kubelet[2510]: I0517 00:34:53.992953 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4gwc\" (UniqueName: \"kubernetes.io/projected/c975f9da-6c98-4900-bbd2-08541503e92e-kube-api-access-j4gwc\") pod \"coredns-668d6bf9bc-fmrv9\" (UID: \"c975f9da-6c98-4900-bbd2-08541503e92e\") " pod="kube-system/coredns-668d6bf9bc-fmrv9" May 17 00:34:53.993554 kubelet[2510]: I0517 00:34:53.992976 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf92987-bd0b-472f-a9b0-2d45c7497558-config\") pod \"goldmane-78d55f7ddc-vss2s\" (UID: \"1cf92987-bd0b-472f-a9b0-2d45c7497558\") " pod="calico-system/goldmane-78d55f7ddc-vss2s" May 17 00:34:53.993554 kubelet[2510]: I0517 00:34:53.992995 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9rq\" (UniqueName: \"kubernetes.io/projected/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-kube-api-access-lf9rq\") pod \"whisker-5d98bcff46-jh685\" (UID: \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\") " pod="calico-system/whisker-5d98bcff46-jh685" May 17 00:34:53.993712 kubelet[2510]: I0517 00:34:53.993021 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkk4h\" (UniqueName: \"kubernetes.io/projected/9537133f-5e07-4b0f-93c4-cc1221685e83-kube-api-access-bkk4h\") pod \"calico-kube-controllers-66b4cdbc55-74hhx\" (UID: \"9537133f-5e07-4b0f-93c4-cc1221685e83\") " pod="calico-system/calico-kube-controllers-66b4cdbc55-74hhx" 
May 17 00:34:53.995884 systemd[1]: Created slice kubepods-burstable-podc975f9da_6c98_4900_bbd2_08541503e92e.slice - libcontainer container kubepods-burstable-podc975f9da_6c98_4900_bbd2_08541503e92e.slice. May 17 00:34:53.999781 systemd[1]: Created slice kubepods-burstable-podb0b2a5ee_c039_427e_9a8b_ca7df66976a4.slice - libcontainer container kubepods-burstable-podb0b2a5ee_c039_427e_9a8b_ca7df66976a4.slice. May 17 00:34:54.004513 systemd[1]: Created slice kubepods-besteffort-pod1cf92987_bd0b_472f_a9b0_2d45c7497558.slice - libcontainer container kubepods-besteffort-pod1cf92987_bd0b_472f_a9b0_2d45c7497558.slice. May 17 00:34:54.009194 systemd[1]: Created slice kubepods-besteffort-pod7152e1a8_ee2c_4e70_b6cf_0017356b00dc.slice - libcontainer container kubepods-besteffort-pod7152e1a8_ee2c_4e70_b6cf_0017356b00dc.slice. May 17 00:34:54.091933 containerd[1461]: time="2025-05-17T00:34:54.091882521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:34:54.298212 kubelet[2510]: E0517 00:34:54.298108 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:54.299936 containerd[1461]: time="2025-05-17T00:34:54.299456279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmrv9,Uid:c975f9da-6c98-4900-bbd2-08541503e92e,Namespace:kube-system,Attempt:0,}" May 17 00:34:54.302355 kubelet[2510]: E0517 00:34:54.302330 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:54.302817 containerd[1461]: time="2025-05-17T00:34:54.302641018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wd2nk,Uid:b0b2a5ee-c039-427e-9a8b-ca7df66976a4,Namespace:kube-system,Attempt:0,}" May 17 00:34:54.307516 containerd[1461]: time="2025-05-17T00:34:54.307460398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vss2s,Uid:1cf92987-bd0b-472f-a9b0-2d45c7497558,Namespace:calico-system,Attempt:0,}" May 17 00:34:54.311793 containerd[1461]: time="2025-05-17T00:34:54.311751247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d98bcff46-jh685,Uid:7152e1a8-ee2c-4e70-b6cf-0017356b00dc,Namespace:calico-system,Attempt:0,}" May 17 00:34:54.548505 containerd[1461]: time="2025-05-17T00:34:54.548405943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-6nfp6,Uid:bef19986-6d7f-4327-9173-74879321bea4,Namespace:calico-apiserver,Attempt:0,}" May 17 00:34:54.589222 containerd[1461]: time="2025-05-17T00:34:54.589179244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-9t9c6,Uid:a5f3228d-9614-4416-8a88-802cb784679f,Namespace:calico-apiserver,Attempt:0,}" May 17 00:34:54.593800 containerd[1461]: time="2025-05-17T00:34:54.593771358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4cdbc55-74hhx,Uid:9537133f-5e07-4b0f-93c4-cc1221685e83,Namespace:calico-system,Attempt:0,}" May 17 00:34:54.946630 systemd[1]: Created slice kubepods-besteffort-podbe42aafd_fcc6_4236_98b3_c64eba42cdf6.slice - libcontainer container kubepods-besteffort-podbe42aafd_fcc6_4236_98b3_c64eba42cdf6.slice. 
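The RunPodSandbox failures that follow all share one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and every sandbox create fails until it exists. A sketch of that readiness check, with the path and error wording taken from the log (calicoNodeReady is an illustrative stand-in, not the plugin's actual code):

```go
// Sketch of the check behind the sandbox errors below: Calico's CNI plugin
// requires /var/lib/calico/nodename before it can set up pod networking.
package main

import (
	"fmt"
	"os"
)

func calicoNodeReady() error {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		return fmt.Errorf("stat /var/lib/calico/nodename: %w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Println("RunPodSandbox would fail:", err)
	}
}
```

Kubelet surfaces each failure as a CreatePodSandboxError and retries with backoff, which is why the same message repeats per pod below until calico/node finishes starting.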
May 17 00:34:54.949073 containerd[1461]: time="2025-05-17T00:34:54.949030925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8pqj,Uid:be42aafd-fcc6-4236-98b3-c64eba42cdf6,Namespace:calico-system,Attempt:0,}" May 17 00:34:55.265412 containerd[1461]: time="2025-05-17T00:34:55.264963925Z" level=error msg="Failed to destroy network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.269999 containerd[1461]: time="2025-05-17T00:34:55.269950089Z" level=error msg="encountered an error cleaning up failed sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.270132 containerd[1461]: time="2025-05-17T00:34:55.270019049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d98bcff46-jh685,Uid:7152e1a8-ee2c-4e70-b6cf-0017356b00dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.276556 containerd[1461]: time="2025-05-17T00:34:55.276404062Z" level=error msg="Failed to destroy network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.280109 containerd[1461]: time="2025-05-17T00:34:55.280058733Z" level=error msg="encountered an error cleaning up failed sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.280220 containerd[1461]: time="2025-05-17T00:34:55.280129836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-9t9c6,Uid:a5f3228d-9614-4416-8a88-802cb784679f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.281132 kubelet[2510]: E0517 00:34:55.281061 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.282984 kubelet[2510]: E0517 00:34:55.281158 2510 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b9c5d64-9t9c6" May 17 00:34:55.282984 kubelet[2510]: E0517 00:34:55.281187 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b9c5d64-9t9c6" May 17 00:34:55.282984 kubelet[2510]: E0517 00:34:55.281242 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-698b9c5d64-9t9c6_calico-apiserver(a5f3228d-9614-4416-8a88-802cb784679f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698b9c5d64-9t9c6_calico-apiserver(a5f3228d-9614-4416-8a88-802cb784679f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698b9c5d64-9t9c6" podUID="a5f3228d-9614-4416-8a88-802cb784679f" May 17 00:34:55.283511 kubelet[2510]: E0517 00:34:55.281499 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.283511 kubelet[2510]: E0517 00:34:55.281519 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d98bcff46-jh685" May 17 00:34:55.283511 kubelet[2510]: E0517 00:34:55.281555 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d98bcff46-jh685" May 17 00:34:55.283698 kubelet[2510]: E0517 00:34:55.281583 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d98bcff46-jh685_calico-system(7152e1a8-ee2c-4e70-b6cf-0017356b00dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5d98bcff46-jh685_calico-system(7152e1a8-ee2c-4e70-b6cf-0017356b00dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d98bcff46-jh685" podUID="7152e1a8-ee2c-4e70-b6cf-0017356b00dc" May 17 00:34:55.284072 containerd[1461]: time="2025-05-17T00:34:55.284002005Z" level=error msg="Failed to destroy network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.286108 containerd[1461]: time="2025-05-17T00:34:55.286083358Z" level=error msg="encountered an error cleaning up failed sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.286332 containerd[1461]: time="2025-05-17T00:34:55.286278285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wd2nk,Uid:b0b2a5ee-c039-427e-9a8b-ca7df66976a4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.286850 kubelet[2510]: E0517 00:34:55.286802 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.286942 kubelet[2510]: E0517 00:34:55.286877 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wd2nk" May 17 00:34:55.286942 kubelet[2510]: E0517 00:34:55.286910 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wd2nk" May 17 00:34:55.287034 kubelet[2510]: E0517 00:34:55.286949 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wd2nk_kube-system(b0b2a5ee-c039-427e-9a8b-ca7df66976a4)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wd2nk_kube-system(b0b2a5ee-c039-427e-9a8b-ca7df66976a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wd2nk" podUID="b0b2a5ee-c039-427e-9a8b-ca7df66976a4" May 17 00:34:55.288271 containerd[1461]: time="2025-05-17T00:34:55.288226367Z" level=error msg="Failed to destroy network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.289376 containerd[1461]: time="2025-05-17T00:34:55.289283895Z" level=error msg="encountered an error cleaning up failed sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.289376 containerd[1461]: time="2025-05-17T00:34:55.289335402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-6nfp6,Uid:bef19986-6d7f-4327-9173-74879321bea4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.289805 kubelet[2510]: E0517 00:34:55.289673 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.289805 kubelet[2510]: E0517 00:34:55.289721 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b9c5d64-6nfp6" May 17 00:34:55.289805 kubelet[2510]: E0517 00:34:55.289738 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b9c5d64-6nfp6" May 17 00:34:55.289930 kubelet[2510]: E0517 00:34:55.289771 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"calico-apiserver-698b9c5d64-6nfp6_calico-apiserver(bef19986-6d7f-4327-9173-74879321bea4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698b9c5d64-6nfp6_calico-apiserver(bef19986-6d7f-4327-9173-74879321bea4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698b9c5d64-6nfp6" podUID="bef19986-6d7f-4327-9173-74879321bea4" May 17 00:34:55.294851 containerd[1461]: time="2025-05-17T00:34:55.294735214Z" level=error msg="Failed to destroy network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.295094 containerd[1461]: time="2025-05-17T00:34:55.295011362Z" level=error msg="Failed to destroy network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.295675 containerd[1461]: time="2025-05-17T00:34:55.295524006Z" level=error msg="encountered an error cleaning up failed sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.295675 containerd[1461]: time="2025-05-17T00:34:55.295591783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vss2s,Uid:1cf92987-bd0b-472f-a9b0-2d45c7497558,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.295675 containerd[1461]: time="2025-05-17T00:34:55.295646676Z" level=error msg="encountered an error cleaning up failed sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.295818 containerd[1461]: time="2025-05-17T00:34:55.295715576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmrv9,Uid:c975f9da-6c98-4900-bbd2-08541503e92e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.296012 kubelet[2510]: E0517 00:34:55.295966 2510 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.296012 kubelet[2510]: E0517 00:34:55.296015 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fmrv9" May 17 00:34:55.296183 kubelet[2510]: E0517 00:34:55.296033 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fmrv9" May 17 00:34:55.296183 kubelet[2510]: E0517 00:34:55.296066 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fmrv9_kube-system(c975f9da-6c98-4900-bbd2-08541503e92e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fmrv9_kube-system(c975f9da-6c98-4900-bbd2-08541503e92e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fmrv9" podUID="c975f9da-6c98-4900-bbd2-08541503e92e" May 17 00:34:55.296183 kubelet[2510]: E0517 00:34:55.296102 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.296309 kubelet[2510]: E0517 00:34:55.296116 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-vss2s" May 17 00:34:55.296309 kubelet[2510]: E0517 00:34:55.296128 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-vss2s" May 17 
00:34:55.296309 kubelet[2510]: E0517 00:34:55.296150 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-vss2s_calico-system(1cf92987-bd0b-472f-a9b0-2d45c7497558)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-vss2s_calico-system(1cf92987-bd0b-472f-a9b0-2d45c7497558)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:34:55.310977 containerd[1461]: time="2025-05-17T00:34:55.310916363Z" level=error msg="Failed to destroy network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.311515 containerd[1461]: time="2025-05-17T00:34:55.311469151Z" level=error msg="encountered an error cleaning up failed sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.311588 containerd[1461]: time="2025-05-17T00:34:55.311551366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8pqj,Uid:be42aafd-fcc6-4236-98b3-c64eba42cdf6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.311839 kubelet[2510]: E0517 00:34:55.311797 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.312114 kubelet[2510]: E0517 00:34:55.312024 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x8pqj" May 17 00:34:55.312114 kubelet[2510]: E0517 00:34:55.312052 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-x8pqj" May 17 00:34:55.312688 kubelet[2510]: E0517 00:34:55.312259 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x8pqj_calico-system(be42aafd-fcc6-4236-98b3-c64eba42cdf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x8pqj_calico-system(be42aafd-fcc6-4236-98b3-c64eba42cdf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6" May 17 00:34:55.327474 containerd[1461]: time="2025-05-17T00:34:55.327412093Z" level=error msg="Failed to destroy network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.327936 containerd[1461]: time="2025-05-17T00:34:55.327899529Z" level=error msg="encountered an error cleaning up failed sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.327993 containerd[1461]: time="2025-05-17T00:34:55.327955294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4cdbc55-74hhx,Uid:9537133f-5e07-4b0f-93c4-cc1221685e83,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.328372 kubelet[2510]: E0517 00:34:55.328319 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:55.328439 kubelet[2510]: E0517 00:34:55.328391 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4cdbc55-74hhx" May 17 00:34:55.328439 kubelet[2510]: E0517 00:34:55.328410 2510 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4cdbc55-74hhx" May 17 00:34:55.328500 kubelet[2510]: E0517 00:34:55.328453 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66b4cdbc55-74hhx_calico-system(9537133f-5e07-4b0f-93c4-cc1221685e83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66b4cdbc55-74hhx_calico-system(9537133f-5e07-4b0f-93c4-cc1221685e83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66b4cdbc55-74hhx" podUID="9537133f-5e07-4b0f-93c4-cc1221685e83" May 17 00:34:56.044839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee-shm.mount: Deactivated successfully. May 17 00:34:56.044975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e-shm.mount: Deactivated successfully. May 17 00:34:56.045061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485-shm.mount: Deactivated successfully. May 17 00:34:56.096882 kubelet[2510]: I0517 00:34:56.096820 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:34:56.098320 kubelet[2510]: I0517 00:34:56.098284 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:34:56.099664 kubelet[2510]: I0517 00:34:56.099635 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:34:56.101784 kubelet[2510]: I0517 00:34:56.100904 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:34:56.114720 containerd[1461]: time="2025-05-17T00:34:56.114662766Z" level=info msg="StopPodSandbox for \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\"" May 17 00:34:56.119158 containerd[1461]: time="2025-05-17T00:34:56.119099124Z" level=info msg="StopPodSandbox for \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\"" May 17 00:34:56.120819 containerd[1461]: time="2025-05-17T00:34:56.120772660Z" level=info msg="Ensure that sandbox 4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6 in task-service has been cleanup successfully" May 17 00:34:56.121174 containerd[1461]: time="2025-05-17T00:34:56.121138217Z" level=info msg="Ensure that sandbox ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee in task-service has been cleanup successfully" May 17 00:34:56.125267 containerd[1461]: time="2025-05-17T00:34:56.125228596Z" level=info msg="StopPodSandbox for \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\"" May 17 00:34:56.125402 containerd[1461]: time="2025-05-17T00:34:56.125376774Z" level=info msg="Ensure that sandbox 
bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a in task-service has been cleanup successfully" May 17 00:34:56.125800 containerd[1461]: time="2025-05-17T00:34:56.125777067Z" level=info msg="StopPodSandbox for \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\"" May 17 00:34:56.126557 kubelet[2510]: I0517 00:34:56.126486 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:34:56.126999 containerd[1461]: time="2025-05-17T00:34:56.126958607Z" level=info msg="StopPodSandbox for \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\"" May 17 00:34:56.127418 containerd[1461]: time="2025-05-17T00:34:56.126999153Z" level=info msg="Ensure that sandbox 5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b in task-service has been cleanup successfully" May 17 00:34:56.127418 containerd[1461]: time="2025-05-17T00:34:56.127406069Z" level=info msg="Ensure that sandbox 0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59 in task-service has been cleanup successfully" May 17 00:34:56.127734 kubelet[2510]: I0517 00:34:56.127709 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:34:56.129997 containerd[1461]: time="2025-05-17T00:34:56.129957825Z" level=info msg="StopPodSandbox for \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\"" May 17 00:34:56.135678 kubelet[2510]: I0517 00:34:56.134517 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:34:56.135742 containerd[1461]: time="2025-05-17T00:34:56.135277965Z" level=info msg="StopPodSandbox for \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\"" May 17 00:34:56.135742 containerd[1461]: time="2025-05-17T00:34:56.135453375Z" level=info msg="Ensure that sandbox 442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e in task-service has been cleanup successfully" May 17 00:34:56.141365 kubelet[2510]: I0517 00:34:56.140949 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:34:56.148789 containerd[1461]: time="2025-05-17T00:34:56.148689734Z" level=info msg="StopPodSandbox for \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\"" May 17 00:34:56.151094 containerd[1461]: time="2025-05-17T00:34:56.150817043Z" level=info msg="Ensure that sandbox 0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539 in task-service has been cleanup successfully" May 17 00:34:56.174771 containerd[1461]: time="2025-05-17T00:34:56.174708160Z" level=error msg="StopPodSandbox for \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\" failed" error="failed to destroy network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.175001 kubelet[2510]: E0517 00:34:56.174962 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:34:56.175075 kubelet[2510]: E0517 00:34:56.175031 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6"} May 17 00:34:56.175112 kubelet[2510]: E0517 00:34:56.175088 2510 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9537133f-5e07-4b0f-93c4-cc1221685e83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.175193 kubelet[2510]: E0517 00:34:56.175118 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9537133f-5e07-4b0f-93c4-cc1221685e83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66b4cdbc55-74hhx" podUID="9537133f-5e07-4b0f-93c4-cc1221685e83" May 17 00:34:56.182222 containerd[1461]: time="2025-05-17T00:34:56.182188580Z" level=error msg="StopPodSandbox for \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\" failed" error="failed to destroy network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.182579 kubelet[2510]: E0517 00:34:56.182388 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:34:56.182579 kubelet[2510]: E0517 00:34:56.182448 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b"} May 17 00:34:56.182579 kubelet[2510]: E0517 00:34:56.182482 2510 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be42aafd-fcc6-4236-98b3-c64eba42cdf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.182579 kubelet[2510]: 
E0517 00:34:56.182506 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be42aafd-fcc6-4236-98b3-c64eba42cdf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x8pqj" podUID="be42aafd-fcc6-4236-98b3-c64eba42cdf6" May 17 00:34:56.187411 containerd[1461]: time="2025-05-17T00:34:56.187103990Z" level=info msg="Ensure that sandbox 532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485 in task-service has been cleanup successfully" May 17 00:34:56.192946 containerd[1461]: time="2025-05-17T00:34:56.192900625Z" level=error msg="StopPodSandbox for \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\" failed" error="failed to destroy network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.193436 kubelet[2510]: E0517 00:34:56.193286 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:34:56.193436 kubelet[2510]: E0517 00:34:56.193342 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e"} May 17 00:34:56.193436 kubelet[2510]: E0517 00:34:56.193377 2510 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c975f9da-6c98-4900-bbd2-08541503e92e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.193436 kubelet[2510]: E0517 00:34:56.193403 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c975f9da-6c98-4900-bbd2-08541503e92e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fmrv9" podUID="c975f9da-6c98-4900-bbd2-08541503e92e" May 17 00:34:56.200226 containerd[1461]: time="2025-05-17T00:34:56.200164088Z" level=error msg="StopPodSandbox for \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\" failed" error="failed to destroy network for sandbox 
\"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.200507 kubelet[2510]: E0517 00:34:56.200453 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:34:56.200576 kubelet[2510]: E0517 00:34:56.200514 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee"} May 17 00:34:56.200576 kubelet[2510]: E0517 00:34:56.200562 2510 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1cf92987-bd0b-472f-a9b0-2d45c7497558\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.200662 kubelet[2510]: E0517 00:34:56.200584 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1cf92987-bd0b-472f-a9b0-2d45c7497558\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:34:56.203100 containerd[1461]: time="2025-05-17T00:34:56.203049852Z" level=error msg="StopPodSandbox for \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\" failed" error="failed to destroy network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.203456 kubelet[2510]: E0517 00:34:56.203427 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:34:56.203515 kubelet[2510]: E0517 00:34:56.203458 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59"} May 17 00:34:56.203515 kubelet[2510]: E0517 00:34:56.203489 2510 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bef19986-6d7f-4327-9173-74879321bea4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.203630 kubelet[2510]: E0517 00:34:56.203510 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bef19986-6d7f-4327-9173-74879321bea4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698b9c5d64-6nfp6" podUID="bef19986-6d7f-4327-9173-74879321bea4" May 17 00:34:56.204243 containerd[1461]: time="2025-05-17T00:34:56.204210805Z" level=error msg="StopPodSandbox for \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\" failed" error="failed to destroy network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.204416 kubelet[2510]: E0517 00:34:56.204382 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:34:56.204450 kubelet[2510]: E0517 00:34:56.204436 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a"} May 17 00:34:56.204479 kubelet[2510]: E0517 00:34:56.204463 2510 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.204547 kubelet[2510]: E0517 00:34:56.204485 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d98bcff46-jh685" 
podUID="7152e1a8-ee2c-4e70-b6cf-0017356b00dc" May 17 00:34:56.227245 containerd[1461]: time="2025-05-17T00:34:56.227183085Z" level=error msg="StopPodSandbox for \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\" failed" error="failed to destroy network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.227759 kubelet[2510]: E0517 00:34:56.227643 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:34:56.227759 kubelet[2510]: E0517 00:34:56.227703 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485"} May 17 00:34:56.227759 kubelet[2510]: E0517 00:34:56.227746 2510 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0b2a5ee-c039-427e-9a8b-ca7df66976a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.228063 kubelet[2510]: E0517 00:34:56.227780 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0b2a5ee-c039-427e-9a8b-ca7df66976a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wd2nk" podUID="b0b2a5ee-c039-427e-9a8b-ca7df66976a4" May 17 00:34:56.229583 containerd[1461]: time="2025-05-17T00:34:56.229519486Z" level=error msg="StopPodSandbox for \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\" failed" error="failed to destroy network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:34:56.229836 kubelet[2510]: E0517 00:34:56.229774 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:34:56.229911 kubelet[2510]: E0517 
00:34:56.229848 2510 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539"} May 17 00:34:56.229947 kubelet[2510]: E0517 00:34:56.229926 2510 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5f3228d-9614-4416-8a88-802cb784679f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:34:56.229997 kubelet[2510]: E0517 00:34:56.229955 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5f3228d-9614-4416-8a88-802cb784679f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698b9c5d64-9t9c6" podUID="a5f3228d-9614-4416-8a88-802cb784679f" May 17 00:35:02.397166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169747890.mount: Deactivated successfully. May 17 00:35:05.214002 containerd[1461]: time="2025-05-17T00:35:05.208347496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:05.242796 containerd[1461]: time="2025-05-17T00:35:05.242699106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:35:05.278713 containerd[1461]: time="2025-05-17T00:35:05.278637085Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:05.340561 containerd[1461]: time="2025-05-17T00:35:05.340479860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:05.341292 containerd[1461]: time="2025-05-17T00:35:05.341257740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 11.249325746s" May 17 00:35:05.341337 containerd[1461]: time="2025-05-17T00:35:05.341312233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:35:05.353490 containerd[1461]: time="2025-05-17T00:35:05.353422516Z" level=info msg="CreateContainer within sandbox \"85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:35:05.687526 containerd[1461]: time="2025-05-17T00:35:05.687434562Z" level=info msg="CreateContainer within sandbox 
\"85470d18262f45ee3fcc446786f2781136d0f6e0ad104a441a0572f45910dfa2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a1a07ed350bbdff7f8ca395a6397ee75c12eea9a22b6d2af0dcf215fb5f008d6\"" May 17 00:35:05.688598 containerd[1461]: time="2025-05-17T00:35:05.688551710Z" level=info msg="StartContainer for \"a1a07ed350bbdff7f8ca395a6397ee75c12eea9a22b6d2af0dcf215fb5f008d6\"" May 17 00:35:05.770803 systemd[1]: Started cri-containerd-a1a07ed350bbdff7f8ca395a6397ee75c12eea9a22b6d2af0dcf215fb5f008d6.scope - libcontainer container a1a07ed350bbdff7f8ca395a6397ee75c12eea9a22b6d2af0dcf215fb5f008d6. May 17 00:35:06.378142 containerd[1461]: time="2025-05-17T00:35:06.378083562Z" level=info msg="StartContainer for \"a1a07ed350bbdff7f8ca395a6397ee75c12eea9a22b6d2af0dcf215fb5f008d6\" returns successfully" May 17 00:35:06.413617 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:35:06.415366 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 00:35:06.556336 kubelet[2510]: I0517 00:35:06.555502 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2jsz6" podStartSLOduration=2.294919181 podStartE2EDuration="25.555480284s" podCreationTimestamp="2025-05-17 00:34:41 +0000 UTC" firstStartedPulling="2025-05-17 00:34:42.083148632 +0000 UTC m=+21.222156915" lastFinishedPulling="2025-05-17 00:35:05.343709735 +0000 UTC m=+44.482718018" observedRunningTime="2025-05-17 00:35:06.411124806 +0000 UTC m=+45.550133089" watchObservedRunningTime="2025-05-17 00:35:06.555480284 +0000 UTC m=+45.694488567" May 17 00:35:06.591719 containerd[1461]: time="2025-05-17T00:35:06.557929603Z" level=info msg="StopPodSandbox for \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\"" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.688 [INFO][3864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.688 [INFO][3864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" iface="eth0" netns="/var/run/netns/cni-243d8579-6f8e-437f-7eac-d308614c96a0" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.688 [INFO][3864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" iface="eth0" netns="/var/run/netns/cni-243d8579-6f8e-437f-7eac-d308614c96a0" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.689 [INFO][3864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" iface="eth0" netns="/var/run/netns/cni-243d8579-6f8e-437f-7eac-d308614c96a0" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.689 [INFO][3864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.689 [INFO][3864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.779 [INFO][3883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.780 [INFO][3883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.780 [INFO][3883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.789 [WARNING][3883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.789 [INFO][3883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.818 [INFO][3883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:06.826422 containerd[1461]: 2025-05-17 00:35:06.822 [INFO][3864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:06.827258 containerd[1461]: time="2025-05-17T00:35:06.826905510Z" level=info msg="TearDown network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\" successfully" May 17 00:35:06.827258 containerd[1461]: time="2025-05-17T00:35:06.826937940Z" level=info msg="StopPodSandbox for \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\" returns successfully" May 17 00:35:06.830130 systemd[1]: run-netns-cni\x2d243d8579\x2d6f8e\x2d437f\x2d7eac\x2dd308614c96a0.mount: Deactivated successfully. 
May 17 00:35:06.879976 kubelet[2510]: I0517 00:35:06.879904 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-ca-bundle\") pod \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\" (UID: \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\") " May 17 00:35:06.879976 kubelet[2510]: I0517 00:35:06.879980 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-backend-key-pair\") pod \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\" (UID: \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\") " May 17 00:35:06.880225 kubelet[2510]: I0517 00:35:06.880009 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf9rq\" (UniqueName: \"kubernetes.io/projected/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-kube-api-access-lf9rq\") pod \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\" (UID: \"7152e1a8-ee2c-4e70-b6cf-0017356b00dc\") " May 17 00:35:06.880692 kubelet[2510]: I0517 00:35:06.880562 2510 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7152e1a8-ee2c-4e70-b6cf-0017356b00dc" (UID: "7152e1a8-ee2c-4e70-b6cf-0017356b00dc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:35:06.885088 kubelet[2510]: I0517 00:35:06.885025 2510 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7152e1a8-ee2c-4e70-b6cf-0017356b00dc" (UID: "7152e1a8-ee2c-4e70-b6cf-0017356b00dc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:35:06.887478 systemd[1]: var-lib-kubelet-pods-7152e1a8\x2dee2c\x2d4e70\x2db6cf\x2d0017356b00dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlf9rq.mount: Deactivated successfully. May 17 00:35:06.887742 kubelet[2510]: I0517 00:35:06.887684 2510 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-kube-api-access-lf9rq" (OuterVolumeSpecName: "kube-api-access-lf9rq") pod "7152e1a8-ee2c-4e70-b6cf-0017356b00dc" (UID: "7152e1a8-ee2c-4e70-b6cf-0017356b00dc"). InnerVolumeSpecName "kube-api-access-lf9rq". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:35:06.887826 systemd[1]: var-lib-kubelet-pods-7152e1a8\x2dee2c\x2d4e70\x2db6cf\x2d0017356b00dc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:35:06.948037 systemd[1]: Removed slice kubepods-besteffort-pod7152e1a8_ee2c_4e70_b6cf_0017356b00dc.slice - libcontainer container kubepods-besteffort-pod7152e1a8_ee2c_4e70_b6cf_0017356b00dc.slice. 
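[Annotation] The UnmountVolume.TearDown records and the systemd mount units above show kubelet detaching the deleted whisker pod's three volumes (CA-bundle configmap, backend key-pair secret, projected service-account token). Unit names like var-lib-kubelet-pods-7152e1a8\x2dee2c\x2d4e70\x2d... are systemd path escapes: '/' maps to '-', and any byte outside [A-Za-z0-9:_.] is rewritten as \xNN, which is why the dashes in the pod UID appear as \x2d and the '~' in kubernetes.io~secret as \x7e. A rough Go approximation of systemd-escape --path (an illustration under those stated rules, not systemd's implementation, and it skips corner cases such as empty or dot-leading components):

    package main

    import "fmt"

    // escapePath approximates `systemd-escape --path`, the transformation
    // behind the mount unit names in the records above.
    func escapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:] // unit names never carry the leading slash
        }
        out := make([]byte, 0, len(p))
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out = append(out, '-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out = append(out, c)
            default:
                out = append(out, fmt.Sprintf(`\x%02x`, c)...)
            }
        }
        return string(out)
    }

    func main() {
        p := "/var/lib/kubelet/pods/7152e1a8-ee2c-4e70-b6cf-0017356b00dc/volumes/kubernetes.io~secret/whisker-backend-key-pair"
        fmt.Println(escapePath(p) + ".mount") // matches the unit name logged above
    }
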
May 17 00:35:06.981206 kubelet[2510]: I0517 00:35:06.981148 2510 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 17 00:35:06.981206 kubelet[2510]: I0517 00:35:06.981197 2510 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 17 00:35:06.981206 kubelet[2510]: I0517 00:35:06.981209 2510 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lf9rq\" (UniqueName: \"kubernetes.io/projected/7152e1a8-ee2c-4e70-b6cf-0017356b00dc-kube-api-access-lf9rq\") on node \"localhost\" DevicePath \"\"" May 17 00:35:07.466266 systemd[1]: Created slice kubepods-besteffort-podabd949cd_2e01_4075_875a_35887707269d.slice - libcontainer container kubepods-besteffort-podabd949cd_2e01_4075_875a_35887707269d.slice. May 17 00:35:07.485420 kubelet[2510]: I0517 00:35:07.485247 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd949cd-2e01-4075-875a-35887707269d-whisker-ca-bundle\") pod \"whisker-64b597d656-vs877\" (UID: \"abd949cd-2e01-4075-875a-35887707269d\") " pod="calico-system/whisker-64b597d656-vs877" May 17 00:35:07.485420 kubelet[2510]: I0517 00:35:07.485313 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abd949cd-2e01-4075-875a-35887707269d-whisker-backend-key-pair\") pod \"whisker-64b597d656-vs877\" (UID: \"abd949cd-2e01-4075-875a-35887707269d\") " pod="calico-system/whisker-64b597d656-vs877" May 17 00:35:07.485420 kubelet[2510]: I0517 00:35:07.485367 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bws2d\" (UniqueName: \"kubernetes.io/projected/abd949cd-2e01-4075-875a-35887707269d-kube-api-access-bws2d\") pod \"whisker-64b597d656-vs877\" (UID: \"abd949cd-2e01-4075-875a-35887707269d\") " pod="calico-system/whisker-64b597d656-vs877" May 17 00:35:07.770718 containerd[1461]: time="2025-05-17T00:35:07.770418048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b597d656-vs877,Uid:abd949cd-2e01-4075-875a-35887707269d,Namespace:calico-system,Attempt:0,}" May 17 00:35:07.939577 containerd[1461]: time="2025-05-17T00:35:07.937896103Z" level=info msg="StopPodSandbox for \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\"" May 17 00:35:07.969397 systemd-networkd[1401]: cali830a7bffb4d: Link UP May 17 00:35:07.970285 systemd-networkd[1401]: cali830a7bffb4d: Gained carrier May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.832 [INFO][3930] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.848 [INFO][3930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64b597d656--vs877-eth0 whisker-64b597d656- calico-system abd949cd-2e01-4075-875a-35887707269d 927 0 2025-05-17 00:35:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64b597d656 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] 
map[] [] [] []} {k8s localhost whisker-64b597d656-vs877 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali830a7bffb4d [] [] }} ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.848 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-eth0" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.879 [INFO][3944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" HandleID="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Workload="localhost-k8s-whisker--64b597d656--vs877-eth0" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.880 [INFO][3944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" HandleID="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Workload="localhost-k8s-whisker--64b597d656--vs877-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64b597d656-vs877", "timestamp":"2025-05-17 00:35:07.879675195 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.880 [INFO][3944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.880 [INFO][3944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.880 [INFO][3944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.888 [INFO][3944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.899 [INFO][3944] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.910 [INFO][3944] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.916 [INFO][3944] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.920 [INFO][3944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.920 [INFO][3944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.923 [INFO][3944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241 May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.936 [INFO][3944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.948 [INFO][3944] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.948 [INFO][3944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" host="localhost" May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.948 [INFO][3944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
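[Annotation] The ipam records above are one CNI ADD's address assignment under Calico's host-wide IPAM lock: look up this host's block affinity, load the affine block 192.168.88.128/26, claim a single address from it, then release the lock. Calico's default IPv4 block size is /26, i.e. 64 addresses per host block. A small net/netip check of that arithmetic (illustrative only, not the allocator itself):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // the block this host holds an affinity for in the trace above
        block := netip.MustParsePrefix("192.168.88.128/26")
        fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64

        // walk the first few addresses; .129 is the one claimed here
        addr := block.Addr()
        for i := 0; i < 3; i++ {
            fmt.Println(addr, "in block:", block.Contains(addr))
            addr = addr.Next()
        }
    }
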
May 17 00:35:07.986202 containerd[1461]: 2025-05-17 00:35:07.948 [INFO][3944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" HandleID="k8s-pod-network.e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Workload="localhost-k8s-whisker--64b597d656--vs877-eth0" May 17 00:35:07.987017 containerd[1461]: 2025-05-17 00:35:07.957 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64b597d656--vs877-eth0", GenerateName:"whisker-64b597d656-", Namespace:"calico-system", SelfLink:"", UID:"abd949cd-2e01-4075-875a-35887707269d", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 35, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64b597d656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64b597d656-vs877", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali830a7bffb4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:07.987017 containerd[1461]: 2025-05-17 00:35:07.957 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-eth0" May 17 00:35:07.987017 containerd[1461]: 2025-05-17 00:35:07.957 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali830a7bffb4d ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-eth0" May 17 00:35:07.987017 containerd[1461]: 2025-05-17 00:35:07.969 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-eth0" May 17 00:35:07.987017 containerd[1461]: 2025-05-17 00:35:07.969 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64b597d656--vs877-eth0", GenerateName:"whisker-64b597d656-", Namespace:"calico-system", SelfLink:"", UID:"abd949cd-2e01-4075-875a-35887707269d", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 35, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64b597d656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241", Pod:"whisker-64b597d656-vs877", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali830a7bffb4d", MAC:"da:4a:83:a4:be:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:07.987017 containerd[1461]: 2025-05-17 00:35:07.982 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241" Namespace="calico-system" Pod="whisker-64b597d656-vs877" WorkloadEndpoint="localhost-k8s-whisker--64b597d656--vs877-eth0" May 17 00:35:08.021682 containerd[1461]: time="2025-05-17T00:35:08.020853857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:08.021682 containerd[1461]: time="2025-05-17T00:35:08.020974413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:08.021682 containerd[1461]: time="2025-05-17T00:35:08.020991205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:08.021682 containerd[1461]: time="2025-05-17T00:35:08.021135365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:08.045942 systemd[1]: Started cri-containerd-e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241.scope - libcontainer container e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241. May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:07.997 [INFO][3963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:07.998 [INFO][3963] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" iface="eth0" netns="/var/run/netns/cni-56ef7575-9d24-1fed-ba66-20d5f54b0adb" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:07.998 [INFO][3963] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" iface="eth0" netns="/var/run/netns/cni-56ef7575-9d24-1fed-ba66-20d5f54b0adb" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:07.998 [INFO][3963] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" iface="eth0" netns="/var/run/netns/cni-56ef7575-9d24-1fed-ba66-20d5f54b0adb" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:07.998 [INFO][3963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:07.998 [INFO][3963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:08.028 [INFO][3980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:08.028 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:08.028 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:08.036 [WARNING][3980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:08.036 [INFO][3980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:08.039 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:08.048734 containerd[1461]: 2025-05-17 00:35:08.043 [INFO][3963] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:08.049408 containerd[1461]: time="2025-05-17T00:35:08.048959556Z" level=info msg="TearDown network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\" successfully" May 17 00:35:08.049408 containerd[1461]: time="2025-05-17T00:35:08.048990444Z" level=info msg="StopPodSandbox for \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\" returns successfully" May 17 00:35:08.049498 kubelet[2510]: E0517 00:35:08.049322 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:08.050984 containerd[1461]: time="2025-05-17T00:35:08.050228639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wd2nk,Uid:b0b2a5ee-c039-427e-9a8b-ca7df66976a4,Namespace:kube-system,Attempt:1,}" May 17 00:35:08.082136 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:08.116101 containerd[1461]: time="2025-05-17T00:35:08.114614335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b597d656-vs877,Uid:abd949cd-2e01-4075-875a-35887707269d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6024e153c8bf0d1157583fce4de834ad9a51f4935fc9e45f1e6900e9fea7241\"" May 17 00:35:08.118473 containerd[1461]: time="2025-05-17T00:35:08.118368173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:35:08.251993 systemd-networkd[1401]: cali6244fca7363: Link UP May 17 00:35:08.253816 systemd-networkd[1401]: cali6244fca7363: Gained carrier May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.153 [INFO][4027] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.167 [INFO][4027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0 coredns-668d6bf9bc- kube-system b0b2a5ee-c039-427e-9a8b-ca7df66976a4 935 0 2025-05-17 00:34:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wd2nk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6244fca7363 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.167 [INFO][4027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.199 [INFO][4042] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" HandleID="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.199 
[INFO][4042] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" HandleID="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033a9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wd2nk", "timestamp":"2025-05-17 00:35:08.199469001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.199 [INFO][4042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.199 [INFO][4042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.199 [INFO][4042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.207 [INFO][4042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.214 [INFO][4042] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.218 [INFO][4042] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.221 [INFO][4042] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.223 [INFO][4042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.223 [INFO][4042] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.226 [INFO][4042] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.231 [INFO][4042] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.243 [INFO][4042] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.243 [INFO][4042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" host="localhost" May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.243 [INFO][4042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
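The IPAM entries above walk a fixed sequence: acquire the host-wide IPAM lock, look up the host's block affinity, load the block 192.168.88.128/26, claim one free address against a per-container handle, write the block back, and release the lock. A minimal sketch of that claim step, assuming a simplified in-memory block — the `Block` type, its bitmap-free bookkeeping, and the handle strings below are illustrative, not Calico's actual data model:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// Block is a toy stand-in for a Calico IPAM block: a /26 whose
// addresses are handed out one at a time and tracked by handle.
type Block struct {
	mu    sync.Mutex            // plays the role of the host-wide IPAM lock
	cidr  netip.Prefix          // e.g. 192.168.88.128/26 from the log
	inUse map[netip.Addr]string // addr -> handle that claimed it
}

// Claim assigns the first free address in the block to handleID,
// mirroring "Attempting to assign 1 addresses from block".
func (b *Block) Claim(handleID string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	// Skip the network address itself, then scan for a free slot.
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.inUse[a]; !taken {
			b.inUse[a] = handleID
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &Block{
		cidr:  netip.MustParsePrefix("192.168.88.128/26"),
		inUse: map[netip.Addr]string{},
	}
	// Two sequential claims yield .129 then .130, matching the
	// whisker and coredns assignments in the log (handle IDs shortened).
	for _, h := range []string{"k8s-pod-network.e6024e15", "k8s-pod-network.5f86f1b1"} {
		a, _ := b.Claim(h)
		fmt.Println(a, "<-", h)
	}
}
```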
May 17 00:35:08.283146 containerd[1461]: 2025-05-17 00:35:08.243 [INFO][4042] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" HandleID="k8s-pod-network.5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.284129 containerd[1461]: 2025-05-17 00:35:08.248 [INFO][4027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0b2a5ee-c039-427e-9a8b-ca7df66976a4", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wd2nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6244fca7363", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:08.284129 containerd[1461]: 2025-05-17 00:35:08.248 [INFO][4027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.284129 containerd[1461]: 2025-05-17 00:35:08.248 [INFO][4027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6244fca7363 ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.284129 containerd[1461]: 2025-05-17 00:35:08.254 [INFO][4027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.284129 
containerd[1461]: 2025-05-17 00:35:08.255 [INFO][4027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0b2a5ee-c039-427e-9a8b-ca7df66976a4", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd", Pod:"coredns-668d6bf9bc-wd2nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6244fca7363", MAC:"0e:f0:e4:f8:73:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:08.284129 containerd[1461]: 2025-05-17 00:35:08.274 [INFO][4027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wd2nk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:08.320329 containerd[1461]: time="2025-05-17T00:35:08.319933347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:08.320329 containerd[1461]: time="2025-05-17T00:35:08.320006375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:08.320329 containerd[1461]: time="2025-05-17T00:35:08.320029147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:08.320329 containerd[1461]: time="2025-05-17T00:35:08.320133854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:08.348790 systemd[1]: Started cri-containerd-5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd.scope - libcontainer container 5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd. May 17 00:35:08.367760 containerd[1461]: time="2025-05-17T00:35:08.367655286Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:08.368239 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:08.412506 systemd[1]: run-netns-cni\x2d56ef7575\x2d9d24\x2d1fed\x2dba66\x2d20d5f54b0adb.mount: Deactivated successfully. May 17 00:35:08.442426 containerd[1461]: time="2025-05-17T00:35:08.442343135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wd2nk,Uid:b0b2a5ee-c039-427e-9a8b-ca7df66976a4,Namespace:kube-system,Attempt:1,} returns sandbox id \"5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd\"" May 17 00:35:08.445141 kubelet[2510]: E0517 00:35:08.444127 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:08.448967 containerd[1461]: time="2025-05-17T00:35:08.448910486Z" level=info msg="CreateContainer within sandbox \"5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:35:08.464859 containerd[1461]: time="2025-05-17T00:35:08.464525072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:08.464859 containerd[1461]: time="2025-05-17T00:35:08.464672408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:35:08.467871 kubelet[2510]: E0517 00:35:08.467818 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:08.467871 kubelet[2510]: E0517 00:35:08.467883 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:08.470061 kubelet[2510]: E0517 00:35:08.469967 2510 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7aa13eb9e554e8d87b7837efa2e20d7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b597d656-vs877_calico-system(abd949cd-2e01-4075-875a-35887707269d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:08.472941 containerd[1461]: time="2025-05-17T00:35:08.472880640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:35:08.611886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586322339.mount: Deactivated successfully. May 17 00:35:08.619143 containerd[1461]: time="2025-05-17T00:35:08.618953566Z" level=info msg="CreateContainer within sandbox \"5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13c301a25cc01445b9a054ac2211edef2909d826599657f1a7f9386a30df6d59\"" May 17 00:35:08.619922 containerd[1461]: time="2025-05-17T00:35:08.619849698Z" level=info msg="StartContainer for \"13c301a25cc01445b9a054ac2211edef2909d826599657f1a7f9386a30df6d59\"" May 17 00:35:08.661807 systemd[1]: Started cri-containerd-13c301a25cc01445b9a054ac2211edef2909d826599657f1a7f9386a30df6d59.scope - libcontainer container 13c301a25cc01445b9a054ac2211edef2909d826599657f1a7f9386a30df6d59. 
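The whisker pull fails before any image data moves: containerd first asks ghcr.io's token endpoint for an anonymous pull token, and that GET itself returns 403 Forbidden, so reference resolution dies and the error bubbles up as ErrImagePull. The exact request URL is quoted in the error message and can be replayed directly; a quick probe, using only the URL copied verbatim from the log:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The token URL exactly as it appears in containerd's error above.
	const tokenURL = "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"

	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// A 403 here means anonymous pulls of this repository are refused,
	// which is what kubelet later reports as ErrImagePull and then
	// ImagePullBackOff for the whisker pod.
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```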
May 17 00:35:08.700873 containerd[1461]: time="2025-05-17T00:35:08.700815825Z" level=info msg="StartContainer for \"13c301a25cc01445b9a054ac2211edef2909d826599657f1a7f9386a30df6d59\" returns successfully" May 17 00:35:08.796457 containerd[1461]: time="2025-05-17T00:35:08.796399055Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:08.856348 containerd[1461]: time="2025-05-17T00:35:08.856283129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:08.856605 containerd[1461]: time="2025-05-17T00:35:08.856402763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:35:08.856676 kubelet[2510]: E0517 00:35:08.856627 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:08.856794 kubelet[2510]: E0517 00:35:08.856680 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:08.856912 kubelet[2510]: E0517 00:35:08.856798 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b597d656-vs877_calico-system(abd949cd-2e01-4075-875a-35887707269d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:08.858031 kubelet[2510]: E0517 00:35:08.857967 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-64b597d656-vs877" podUID="abd949cd-2e01-4075-875a-35887707269d" May 17 00:35:08.938634 containerd[1461]: time="2025-05-17T00:35:08.938463785Z" level=info msg="StopPodSandbox for 
\"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\"" May 17 00:35:08.939569 containerd[1461]: time="2025-05-17T00:35:08.939062619Z" level=info msg="StopPodSandbox for \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\"" May 17 00:35:08.939569 containerd[1461]: time="2025-05-17T00:35:08.939324231Z" level=info msg="StopPodSandbox for \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\"" May 17 00:35:08.940873 kubelet[2510]: I0517 00:35:08.940819 2510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7152e1a8-ee2c-4e70-b6cf-0017356b00dc" path="/var/lib/kubelet/pods/7152e1a8-ee2c-4e70-b6cf-0017356b00dc/volumes" May 17 00:35:09.137649 systemd-networkd[1401]: cali830a7bffb4d: Gained IPv6LL May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.130 [INFO][4275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4275] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" iface="eth0" netns="/var/run/netns/cni-15c2c981-2af0-a4c1-9e5e-2e2fe5e4297d" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4275] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" iface="eth0" netns="/var/run/netns/cni-15c2c981-2af0-a4c1-9e5e-2e2fe5e4297d" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4275] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" iface="eth0" netns="/var/run/netns/cni-15c2c981-2af0-a4c1-9e5e-2e2fe5e4297d" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.162 [INFO][4295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.162 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.162 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.174 [WARNING][4295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.174 [INFO][4295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.175 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:09.182836 containerd[1461]: 2025-05-17 00:35:09.177 [INFO][4275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:09.183585 containerd[1461]: time="2025-05-17T00:35:09.183405490Z" level=info msg="TearDown network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\" successfully" May 17 00:35:09.183585 containerd[1461]: time="2025-05-17T00:35:09.183441076Z" level=info msg="StopPodSandbox for \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\" returns successfully" May 17 00:35:09.184288 containerd[1461]: time="2025-05-17T00:35:09.184255887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4cdbc55-74hhx,Uid:9537133f-5e07-4b0f-93c4-cc1221685e83,Namespace:calico-system,Attempt:1,}" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.130 [INFO][4261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.130 [INFO][4261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" iface="eth0" netns="/var/run/netns/cni-8e4db5f2-bfe6-8e9a-e7ea-9469f3cd6dd9" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.130 [INFO][4261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" iface="eth0" netns="/var/run/netns/cni-8e4db5f2-bfe6-8e9a-e7ea-9469f3cd6dd9" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.132 [INFO][4261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" iface="eth0" netns="/var/run/netns/cni-8e4db5f2-bfe6-8e9a-e7ea-9469f3cd6dd9" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.132 [INFO][4261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.132 [INFO][4261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.163 [INFO][4297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.163 [INFO][4297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.175 [INFO][4297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.183 [WARNING][4297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.183 [INFO][4297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.184 [INFO][4297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:09.189968 containerd[1461]: 2025-05-17 00:35:09.187 [INFO][4261] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:09.190550 containerd[1461]: time="2025-05-17T00:35:09.190113143Z" level=info msg="TearDown network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\" successfully" May 17 00:35:09.190550 containerd[1461]: time="2025-05-17T00:35:09.190146416Z" level=info msg="StopPodSandbox for \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\" returns successfully" May 17 00:35:09.191071 containerd[1461]: time="2025-05-17T00:35:09.191034893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-6nfp6,Uid:bef19986-6d7f-4327-9173-74879321bea4,Namespace:calico-apiserver,Attempt:1,}" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.129 [INFO][4276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.130 [INFO][4276] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" iface="eth0" netns="/var/run/netns/cni-20742712-fa1c-1f63-d2fa-b0f140aa7e88" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.130 [INFO][4276] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" iface="eth0" netns="/var/run/netns/cni-20742712-fa1c-1f63-d2fa-b0f140aa7e88" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4276] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" iface="eth0" netns="/var/run/netns/cni-20742712-fa1c-1f63-d2fa-b0f140aa7e88" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.131 [INFO][4276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.177 [INFO][4294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.177 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.184 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.189 [WARNING][4294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.189 [INFO][4294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.190 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:09.196204 containerd[1461]: 2025-05-17 00:35:09.193 [INFO][4276] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:09.196566 containerd[1461]: time="2025-05-17T00:35:09.196331287Z" level=info msg="TearDown network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\" successfully" May 17 00:35:09.196566 containerd[1461]: time="2025-05-17T00:35:09.196352797Z" level=info msg="StopPodSandbox for \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\" returns successfully" May 17 00:35:09.196907 containerd[1461]: time="2025-05-17T00:35:09.196887401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vss2s,Uid:1cf92987-bd0b-472f-a9b0-2d45c7497558,Namespace:calico-system,Attempt:1,}" May 17 00:35:09.376893 systemd-networkd[1401]: cali2a4a00c2d3f: Link UP May 17 00:35:09.377627 systemd-networkd[1401]: cali2a4a00c2d3f: Gained carrier May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.287 [INFO][4317] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.303 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0 goldmane-78d55f7ddc- calico-system 1cf92987-bd0b-472f-a9b0-2d45c7497558 952 0 2025-05-17 00:34:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-vss2s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2a4a00c2d3f [] [] }} ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.303 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.334 [INFO][4357] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" HandleID="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.334 [INFO][4357] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" HandleID="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-vss2s", "timestamp":"2025-05-17 00:35:09.334483376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.334 [INFO][4357] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.335 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.335 [INFO][4357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.345 [INFO][4357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.350 [INFO][4357] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.354 [INFO][4357] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.356 [INFO][4357] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.358 [INFO][4357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.358 [INFO][4357] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.359 [INFO][4357] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6 May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.363 [INFO][4357] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.368 [INFO][4357] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.368 [INFO][4357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" host="localhost" May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.368 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
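The teardown entries earlier in this stretch show the CNI plugin entering the pod's network namespace (netns="/var/run/netns/cni-…") to delete the workload's veth, and treating an already-missing veth as success ("Workload's veth was already gone. Nothing to do."). A minimal sketch of that pattern, assuming the github.com/containernetworking/plugins/pkg/ns and github.com/vishvananda/netlink packages; this is an illustrative reimplementation, not Calico's actual dataplane_linux.go:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/containernetworking/plugins/pkg/ns"
	"github.com/vishvananda/netlink"
)

// deleteWorkloadVeth enters the pod netns and removes its eth0,
// treating a missing link as success ("veth was already gone").
// Requires root and an existing netns to actually run.
func deleteWorkloadVeth(netnsPath, iface string) error {
	return ns.WithNetNSPath(netnsPath, func(_ ns.NetNS) error {
		link, err := netlink.LinkByName(iface)
		if err != nil {
			var notFound netlink.LinkNotFoundError
			if errors.As(err, &notFound) {
				return nil // nothing to do
			}
			return err
		}
		return netlink.LinkDel(link)
	})
}

func main() {
	// Path and interface name as they appear in the teardown entries.
	err := deleteWorkloadVeth("/var/run/netns/cni-56ef7575-9d24-1fed-ba66-20d5f54b0adb", "eth0")
	fmt.Println("teardown:", err)
}
```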
May 17 00:35:09.391280 containerd[1461]: 2025-05-17 00:35:09.368 [INFO][4357] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" HandleID="k8s-pod-network.7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.391879 containerd[1461]: 2025-05-17 00:35:09.373 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1cf92987-bd0b-472f-a9b0-2d45c7497558", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-vss2s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a4a00c2d3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:09.391879 containerd[1461]: 2025-05-17 00:35:09.374 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.391879 containerd[1461]: 2025-05-17 00:35:09.374 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a4a00c2d3f ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.391879 containerd[1461]: 2025-05-17 00:35:09.377 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.391879 containerd[1461]: 2025-05-17 00:35:09.377 [INFO][4317] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1cf92987-bd0b-472f-a9b0-2d45c7497558", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6", Pod:"goldmane-78d55f7ddc-vss2s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a4a00c2d3f", MAC:"22:30:3e:3f:27:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:09.391879 containerd[1461]: 2025-05-17 00:35:09.388 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-vss2s" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:09.398695 kubelet[2510]: E0517 00:35:09.398442 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:09.399862 kubelet[2510]: E0517 00:35:09.399817 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-64b597d656-vs877" podUID="abd949cd-2e01-4075-875a-35887707269d" May 17 00:35:09.411634 systemd[1]: run-netns-cni\x2d15c2c981\x2d2af0\x2da4c1\x2d9e5e\x2d2e2fe5e4297d.mount: Deactivated successfully. 
May 17 00:35:09.412115 systemd[1]: run-netns-cni\x2d8e4db5f2\x2dbfe6\x2d8e9a\x2de7ea\x2d9469f3cd6dd9.mount: Deactivated successfully. May 17 00:35:09.412211 systemd[1]: run-netns-cni\x2d20742712\x2dfa1c\x2d1f63\x2dd2fa\x2db0f140aa7e88.mount: Deactivated successfully. May 17 00:35:09.424558 containerd[1461]: time="2025-05-17T00:35:09.424232523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:09.424558 containerd[1461]: time="2025-05-17T00:35:09.424385610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:09.424558 containerd[1461]: time="2025-05-17T00:35:09.424403193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:09.424558 containerd[1461]: time="2025-05-17T00:35:09.424494404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:09.433225 kubelet[2510]: I0517 00:35:09.433025 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wd2nk" podStartSLOduration=43.433005343 podStartE2EDuration="43.433005343s" podCreationTimestamp="2025-05-17 00:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:09.411191629 +0000 UTC m=+48.550199912" watchObservedRunningTime="2025-05-17 00:35:09.433005343 +0000 UTC m=+48.572013626" May 17 00:35:09.456810 systemd[1]: Started cri-containerd-7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6.scope - libcontainer container 7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6. 
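The run-netns-cni\x2d… mount units that systemd just deactivated are escaped forms of the netns paths: in a systemd unit name, '-' stands for the path separator, so literal dashes inside "cni-56ef7575-…" must be written as \x2d. A small Go function applying the same transformation — a simplified sketch of systemd-escape covering just the characters that occur here, not the full algorithm (it also ignores the leading-dot rule and the ".mount" suffix):

```go
package main

import "fmt"

// systemdEscape mimics the part of systemd's unit-name escaping seen
// in the log: '/' becomes '-', and any byte outside [a-zA-Z0-9:_.]
// (including a literal '-') becomes \xXX.
func systemdEscape(path string) string {
	out := ""
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	// Reproduces the unit name from the log (minus the ".mount" suffix):
	// run-netns-cni\x2d56ef7575\x2d9d24\x2d1fed\x2dba66\x2d20d5f54b0adb
	fmt.Println(systemdEscape("run/netns/cni-56ef7575-9d24-1fed-ba66-20d5f54b0adb"))
}
```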
May 17 00:35:09.475202 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:09.492651 systemd-networkd[1401]: cali546dfe2574f: Link UP May 17 00:35:09.494278 systemd-networkd[1401]: cali546dfe2574f: Gained carrier May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.301 [INFO][4323] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.311 [INFO][4323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0 calico-apiserver-698b9c5d64- calico-apiserver bef19986-6d7f-4327-9173-74879321bea4 954 0 2025-05-17 00:34:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698b9c5d64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-698b9c5d64-6nfp6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali546dfe2574f [] [] }} ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.311 [INFO][4323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.350 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" HandleID="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.350 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" HandleID="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-698b9c5d64-6nfp6", "timestamp":"2025-05-17 00:35:09.350124931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.350 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.369 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.369 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.450 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.460 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.468 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.470 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.473 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.473 [INFO][4366] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.475 [INFO][4366] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986 May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.480 [INFO][4366] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.485 [INFO][4366] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.485 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" host="localhost" May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.485 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
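The kubelet dns.go:153 warnings that keep repeating through this log come from a hard cap: kubelet propagates only a limited number of nameservers into a pod's resolv.conf and drops the rest, which is why the applied line in the log holds exactly three entries (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that truncation, assuming the conventional limit of three; the parsing is simplified, the fourth server 9.9.9.9 below is a made-up entry purely for illustration, and kubelet's real resolv.conf handling lives in pkg/kubelet/network/dns:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // the limit behind "Nameserver limits exceeded"

// applyNameserverLimit keeps only the first maxNameservers entries,
// mirroring "some nameservers have been omitted".
func applyNameserverLimit(resolvConf string) (kept, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return kept, omitted
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, omitted := applyNameserverLimit(conf)
	// Prints the same applied line as the log, with the extra server dropped.
	fmt.Println("applied nameserver line is:", strings.Join(kept, " "))
	fmt.Println("omitted:", omitted)
}
```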
May 17 00:35:09.513624 containerd[1461]: 2025-05-17 00:35:09.485 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" HandleID="k8s-pod-network.25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.514389 containerd[1461]: 2025-05-17 00:35:09.489 [INFO][4323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"bef19986-6d7f-4327-9173-74879321bea4", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-698b9c5d64-6nfp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali546dfe2574f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:09.514389 containerd[1461]: 2025-05-17 00:35:09.489 [INFO][4323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.514389 containerd[1461]: 2025-05-17 00:35:09.489 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali546dfe2574f ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.514389 containerd[1461]: 2025-05-17 00:35:09.497 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.514389 containerd[1461]: 2025-05-17 00:35:09.498 [INFO][4323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"bef19986-6d7f-4327-9173-74879321bea4", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986", Pod:"calico-apiserver-698b9c5d64-6nfp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali546dfe2574f", MAC:"7a:76:4b:cd:a7:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:09.514389 containerd[1461]: 2025-05-17 00:35:09.508 [INFO][4323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-6nfp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:09.525951 containerd[1461]: time="2025-05-17T00:35:09.525894735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-vss2s,Uid:1cf92987-bd0b-472f-a9b0-2d45c7497558,Namespace:calico-system,Attempt:1,} returns sandbox id \"7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6\"" May 17 00:35:09.527878 containerd[1461]: time="2025-05-17T00:35:09.527845767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:35:09.537686 containerd[1461]: time="2025-05-17T00:35:09.537324533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:09.537686 containerd[1461]: time="2025-05-17T00:35:09.537387912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:09.537686 containerd[1461]: time="2025-05-17T00:35:09.537411336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:09.539873 containerd[1461]: time="2025-05-17T00:35:09.538891575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:09.564765 systemd[1]: Started cri-containerd-25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986.scope - libcontainer container 25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986. May 17 00:35:09.580665 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:09.605807 containerd[1461]: time="2025-05-17T00:35:09.605763762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-6nfp6,Uid:bef19986-6d7f-4327-9173-74879321bea4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986\"" May 17 00:35:09.756178 containerd[1461]: time="2025-05-17T00:35:09.756009265Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:09.764388 systemd-networkd[1401]: calid461390d1e3: Link UP May 17 00:35:09.764631 systemd-networkd[1401]: calid461390d1e3: Gained carrier May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.315 [INFO][4340] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.327 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0 calico-kube-controllers-66b4cdbc55- calico-system 9537133f-5e07-4b0f-93c4-cc1221685e83 953 0 2025-05-17 00:34:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66b4cdbc55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66b4cdbc55-74hhx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid461390d1e3 [] [] }} ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.327 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.359 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" HandleID="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.359 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" HandleID="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" 
Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66b4cdbc55-74hhx", "timestamp":"2025-05-17 00:35:09.359419351 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.359 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.485 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.485 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.547 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.560 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.567 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.569 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.571 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.571 [INFO][4373] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.572 [INFO][4373] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64 May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.614 [INFO][4373] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.756 [INFO][4373] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.756 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" host="localhost" May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.756 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:35:09.825245 containerd[1461]: 2025-05-17 00:35:09.756 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" HandleID="k8s-pod-network.c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.828233 containerd[1461]: 2025-05-17 00:35:09.761 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0", GenerateName:"calico-kube-controllers-66b4cdbc55-", Namespace:"calico-system", SelfLink:"", UID:"9537133f-5e07-4b0f-93c4-cc1221685e83", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66b4cdbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66b4cdbc55-74hhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid461390d1e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:09.828233 containerd[1461]: 2025-05-17 00:35:09.761 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.828233 containerd[1461]: 2025-05-17 00:35:09.761 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid461390d1e3 ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.828233 containerd[1461]: 2025-05-17 00:35:09.765 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.828233 containerd[1461]: 2025-05-17 00:35:09.765 [INFO][4340] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0", GenerateName:"calico-kube-controllers-66b4cdbc55-", Namespace:"calico-system", SelfLink:"", UID:"9537133f-5e07-4b0f-93c4-cc1221685e83", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66b4cdbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64", Pod:"calico-kube-controllers-66b4cdbc55-74hhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid461390d1e3", MAC:"36:e5:0d:c5:82:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:09.828233 containerd[1461]: 2025-05-17 00:35:09.821 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64" Namespace="calico-system" Pod="calico-kube-controllers-66b4cdbc55-74hhx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:09.882748 containerd[1461]: time="2025-05-17T00:35:09.882661664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:35:09.882918 containerd[1461]: time="2025-05-17T00:35:09.882706388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:09.883447 kubelet[2510]: E0517 00:35:09.883218 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:09.883447 kubelet[2510]: E0517 
00:35:09.883281 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:09.883725 kubelet[2510]: E0517 00:35:09.883557 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4w9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-vss2s_calico-system(1cf92987-bd0b-472f-a9b0-2d45c7497558): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:09.883829 containerd[1461]: time="2025-05-17T00:35:09.883701767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:35:09.885681 kubelet[2510]: E0517 00:35:09.885641 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:35:09.900578 containerd[1461]: time="2025-05-17T00:35:09.900461552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:09.900578 containerd[1461]: time="2025-05-17T00:35:09.900523658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:09.900578 containerd[1461]: time="2025-05-17T00:35:09.900551450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:09.900788 containerd[1461]: time="2025-05-17T00:35:09.900639066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:09.906003 systemd-networkd[1401]: cali6244fca7363: Gained IPv6LL May 17 00:35:09.920705 systemd[1]: Started cri-containerd-c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64.scope - libcontainer container c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64. May 17 00:35:09.932440 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:09.938381 containerd[1461]: time="2025-05-17T00:35:09.938346338Z" level=info msg="StopPodSandbox for \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\"" May 17 00:35:09.962055 containerd[1461]: time="2025-05-17T00:35:09.962000055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4cdbc55-74hhx,Uid:9537133f-5e07-4b0f-93c4-cc1221685e83,Namespace:calico-system,Attempt:1,} returns sandbox id \"c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64\"" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:09.983 [INFO][4555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:09.983 [INFO][4555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" iface="eth0" netns="/var/run/netns/cni-52c77f43-16e3-2b70-3f0c-540f6b79bc4b" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:09.984 [INFO][4555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" iface="eth0" netns="/var/run/netns/cni-52c77f43-16e3-2b70-3f0c-540f6b79bc4b" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:09.985 [INFO][4555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" iface="eth0" netns="/var/run/netns/cni-52c77f43-16e3-2b70-3f0c-540f6b79bc4b" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:09.985 [INFO][4555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:09.985 [INFO][4555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:10.008 [INFO][4571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:10.008 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:10.008 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:10.015 [WARNING][4571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:10.015 [INFO][4571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:10.017 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:10.023760 containerd[1461]: 2025-05-17 00:35:10.020 [INFO][4555] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:10.024157 containerd[1461]: time="2025-05-17T00:35:10.023900966Z" level=info msg="TearDown network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\" successfully" May 17 00:35:10.024157 containerd[1461]: time="2025-05-17T00:35:10.023939108Z" level=info msg="StopPodSandbox for \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\" returns successfully" May 17 00:35:10.024843 containerd[1461]: time="2025-05-17T00:35:10.024804543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8pqj,Uid:be42aafd-fcc6-4236-98b3-c64eba42cdf6,Namespace:calico-system,Attempt:1,}" May 17 00:35:10.137605 systemd-networkd[1401]: cali1f2fb612957: Link UP May 17 00:35:10.138369 systemd-networkd[1401]: cali1f2fb612957: Gained carrier May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.061 [INFO][4580] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.073 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x8pqj-eth0 csi-node-driver- calico-system be42aafd-fcc6-4236-98b3-c64eba42cdf6 992 0 2025-05-17 00:34:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x8pqj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1f2fb612957 [] [] }} ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.073 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.100 [INFO][4594] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" HandleID="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.101 [INFO][4594] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" HandleID="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000117510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x8pqj", "timestamp":"2025-05-17 00:35:10.100926818 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:10.152270 containerd[1461]: 2025-05-17 
00:35:10.101 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.101 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.101 [INFO][4594] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.109 [INFO][4594] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.113 [INFO][4594] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.117 [INFO][4594] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.118 [INFO][4594] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.120 [INFO][4594] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.120 [INFO][4594] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.122 [INFO][4594] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.125 [INFO][4594] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.131 [INFO][4594] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.131 [INFO][4594] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" host="localhost" May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.131 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
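The "Writing block in order to claim IPs" step that recurs in each of these sequences is, as far as the log shows, the other half of the concurrency story: the host-wide lock serializes allocators on this node, while the block write-back is an optimistic update against a datastore revision, so two hosts racing on a shared block would conflict at write time and retry. A rough sketch of that pattern under that assumption, with a plain integer revision standing in for the datastore's:

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    )

    var errConflict = errors.New("update conflict: revision changed")

    // store is a toy revisioned datastore: a write succeeds only when
    // the caller presents the revision it read (optimistic locking).
    type store struct {
    	mu   sync.Mutex
    	rev  int
    	free int // free addresses left in the block
    }

    func (s *store) read() (rev, free int) {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	return s.rev, s.free
    }

    func (s *store) writeClaim(readRev int) error {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	if s.rev != readRev {
    		return errConflict // another writer updated the block first
    	}
    	s.free--
    	s.rev++
    	return nil
    }

    // claim retries on conflict, the way an allocator would re-read
    // the block and try again after losing a write race.
    func claim(s *store) error {
    	for {
    		rev, free := s.read()
    		if free == 0 {
    			return errors.New("block full")
    		}
    		if err := s.writeClaim(rev); err == nil {
    			return nil
    		}
    	}
    }

    func main() {
    	s := &store{free: 64}
    	if err := claim(s); err == nil {
    		_, free := s.read()
    		fmt.Println("claimed one address, free left:", free)
    	}
    }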
May 17 00:35:10.152270 containerd[1461]: 2025-05-17 00:35:10.131 [INFO][4594] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" HandleID="k8s-pod-network.999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.153002 containerd[1461]: 2025-05-17 00:35:10.135 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x8pqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be42aafd-fcc6-4236-98b3-c64eba42cdf6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x8pqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f2fb612957", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:10.153002 containerd[1461]: 2025-05-17 00:35:10.135 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.153002 containerd[1461]: 2025-05-17 00:35:10.135 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f2fb612957 ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.153002 containerd[1461]: 2025-05-17 00:35:10.138 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.153002 containerd[1461]: 2025-05-17 00:35:10.139 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x8pqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be42aafd-fcc6-4236-98b3-c64eba42cdf6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be", Pod:"csi-node-driver-x8pqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f2fb612957", MAC:"a2:a6:27:1c:87:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:10.153002 containerd[1461]: 2025-05-17 00:35:10.148 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be" Namespace="calico-system" Pod="csi-node-driver-x8pqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:10.173327 containerd[1461]: time="2025-05-17T00:35:10.173194236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:10.173327 containerd[1461]: time="2025-05-17T00:35:10.173289575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:10.173560 containerd[1461]: time="2025-05-17T00:35:10.173310264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:10.173560 containerd[1461]: time="2025-05-17T00:35:10.173450467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:10.197704 systemd[1]: Started cri-containerd-999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be.scope - libcontainer container 999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be. 
May 17 00:35:10.218885 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:10.235218 containerd[1461]: time="2025-05-17T00:35:10.235137283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8pqj,Uid:be42aafd-fcc6-4236-98b3-c64eba42cdf6,Namespace:calico-system,Attempt:1,} returns sandbox id \"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be\"" May 17 00:35:10.403183 kubelet[2510]: E0517 00:35:10.403100 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:35:10.410574 kubelet[2510]: E0517 00:35:10.409857 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:10.415644 systemd[1]: run-netns-cni\x2d52c77f43\x2d16e3\x2d2b70\x2d3f0c\x2d540f6b79bc4b.mount: Deactivated successfully. May 17 00:35:10.464313 kubelet[2510]: I0517 00:35:10.464273 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:10.464727 kubelet[2510]: E0517 00:35:10.464699 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:10.736701 systemd-networkd[1401]: cali2a4a00c2d3f: Gained IPv6LL May 17 00:35:10.938242 containerd[1461]: time="2025-05-17T00:35:10.938185691Z" level=info msg="StopPodSandbox for \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\"" May 17 00:35:10.940139 containerd[1461]: time="2025-05-17T00:35:10.939377028Z" level=info msg="StopPodSandbox for \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\"" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.003 [INFO][4726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.003 [INFO][4726] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" iface="eth0" netns="/var/run/netns/cni-0be1c7c6-8910-f4e7-9f92-16b7749ab80e" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.003 [INFO][4726] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" iface="eth0" netns="/var/run/netns/cni-0be1c7c6-8910-f4e7-9f92-16b7749ab80e" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.003 [INFO][4726] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" iface="eth0" netns="/var/run/netns/cni-0be1c7c6-8910-f4e7-9f92-16b7749ab80e" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.003 [INFO][4726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.003 [INFO][4726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.034 [INFO][4746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.034 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.035 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.040 [WARNING][4746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.040 [INFO][4746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.042 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:11.048620 containerd[1461]: 2025-05-17 00:35:11.045 [INFO][4726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:11.053798 containerd[1461]: time="2025-05-17T00:35:11.050498742Z" level=info msg="TearDown network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\" successfully" May 17 00:35:11.053798 containerd[1461]: time="2025-05-17T00:35:11.052676782Z" level=info msg="StopPodSandbox for \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\" returns successfully" May 17 00:35:11.053921 kubelet[2510]: E0517 00:35:11.053034 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:11.053942 systemd[1]: run-netns-cni\x2d0be1c7c6\x2d8910\x2df4e7\x2d9f92\x2d16b7749ab80e.mount: Deactivated successfully. 
May 17 00:35:11.054854 containerd[1461]: time="2025-05-17T00:35:11.054301361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmrv9,Uid:c975f9da-6c98-4900-bbd2-08541503e92e,Namespace:kube-system,Attempt:1,}" May 17 00:35:11.068591 kernel: bpftool[4769]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:10.995 [INFO][4725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:10.995 [INFO][4725] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" iface="eth0" netns="/var/run/netns/cni-e948e318-97a6-242d-2e18-48cfc271caf0" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:10.996 [INFO][4725] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" iface="eth0" netns="/var/run/netns/cni-e948e318-97a6-242d-2e18-48cfc271caf0" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:10.998 [INFO][4725] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" iface="eth0" netns="/var/run/netns/cni-e948e318-97a6-242d-2e18-48cfc271caf0" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:10.998 [INFO][4725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:10.999 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:11.054 [INFO][4743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:11.055 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:11.055 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:11.061 [WARNING][4743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:11.061 [INFO][4743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:11.062 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
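Both teardown sequences above illustrate the DEL path's designed idempotency: the plugin first tries to release by handleID, logs "Asked to release address but it doesn't exist. Ignoring" as a warning rather than an error, then falls back to releasing by workloadID, so a sandbox whose address was already freed (or never assigned) tears down cleanly instead of wedging pod deletion. The CNI specification requires DEL to complete without error even when there is nothing left to clean up; these WARNING-then-continue records are what that requirement looks like in practice.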
May 17 00:35:11.071673 containerd[1461]: 2025-05-17 00:35:11.066 [INFO][4725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:11.071673 containerd[1461]: time="2025-05-17T00:35:11.069778725Z" level=info msg="TearDown network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\" successfully" May 17 00:35:11.071673 containerd[1461]: time="2025-05-17T00:35:11.069805004Z" level=info msg="StopPodSandbox for \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\" returns successfully" May 17 00:35:11.073003 containerd[1461]: time="2025-05-17T00:35:11.072721159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-9t9c6,Uid:a5f3228d-9614-4416-8a88-802cb784679f,Namespace:calico-apiserver,Attempt:1,}" May 17 00:35:11.073381 systemd[1]: run-netns-cni\x2de948e318\x2d97a6\x2d242d\x2d2e18\x2d48cfc271caf0.mount: Deactivated successfully. May 17 00:35:11.185659 systemd-networkd[1401]: cali1f2fb612957: Gained IPv6LL May 17 00:35:11.278689 systemd-networkd[1401]: calic731d5a6843: Link UP May 17 00:35:11.279204 systemd-networkd[1401]: calic731d5a6843: Gained carrier May 17 00:35:11.313047 systemd-networkd[1401]: calid461390d1e3: Gained IPv6LL May 17 00:35:11.377497 systemd-networkd[1401]: vxlan.calico: Link UP May 17 00:35:11.377508 systemd-networkd[1401]: vxlan.calico: Gained carrier May 17 00:35:11.413054 kubelet[2510]: E0517 00:35:11.413023 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:11.414574 kubelet[2510]: E0517 00:35:11.414342 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:35:11.414934 kubelet[2510]: E0517 00:35:11.414918 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.137 [INFO][4770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0 coredns-668d6bf9bc- kube-system c975f9da-6c98-4900-bbd2-08541503e92e 1015 0 2025-05-17 00:34:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-fmrv9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic731d5a6843 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.137 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.167 [INFO][4799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" HandleID="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.167 [INFO][4799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" HandleID="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e2aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-fmrv9", "timestamp":"2025-05-17 00:35:11.167093816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.167 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.167 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.167 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.173 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.176 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.180 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.181 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.183 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.183 [INFO][4799] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.190 [INFO][4799] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.206 [INFO][4799] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.273 [INFO][4799] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.273 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" host="localhost" May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.273 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
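[Editor's note — not part of the log] The IPAM trace above walks Calico's block-affinity path: host "localhost" holds an affine /26 block and the handler claims 192.168.88.135 from it. A small standard-library check (illustrative only) confirms the arithmetic:

```python
import ipaddress

# Block and address as logged by ipam.go above.
block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.135")

print(block.num_addresses)   # 64 addresses per /26 block
print(block[0], block[-1])   # 192.168.88.128 192.168.88.191
print(claimed in block)      # True: .135 falls inside the affine block
```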
May 17 00:35:11.464623 containerd[1461]: 2025-05-17 00:35:11.273 [INFO][4799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" HandleID="k8s-pod-network.9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.465381 containerd[1461]: 2025-05-17 00:35:11.276 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c975f9da-6c98-4900-bbd2-08541503e92e", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-fmrv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic731d5a6843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:11.465381 containerd[1461]: 2025-05-17 00:35:11.276 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.465381 containerd[1461]: 2025-05-17 00:35:11.276 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic731d5a6843 ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.465381 containerd[1461]: 2025-05-17 00:35:11.279 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.465381 
containerd[1461]: 2025-05-17 00:35:11.279 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c975f9da-6c98-4900-bbd2-08541503e92e", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e", Pod:"coredns-668d6bf9bc-fmrv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic731d5a6843", MAC:"9e:cc:e0:22:c0:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:11.465381 containerd[1461]: 2025-05-17 00:35:11.456 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fmrv9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:11.504683 systemd-networkd[1401]: cali546dfe2574f: Gained IPv6LL May 17 00:35:11.625005 containerd[1461]: time="2025-05-17T00:35:11.624291291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:11.625161 containerd[1461]: time="2025-05-17T00:35:11.624978161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:11.625161 containerd[1461]: time="2025-05-17T00:35:11.624991686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:11.625161 containerd[1461]: time="2025-05-17T00:35:11.625079711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:11.652685 systemd[1]: Started cri-containerd-9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e.scope - libcontainer container 9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e. May 17 00:35:11.666221 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:11.692160 containerd[1461]: time="2025-05-17T00:35:11.691994394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmrv9,Uid:c975f9da-6c98-4900-bbd2-08541503e92e,Namespace:kube-system,Attempt:1,} returns sandbox id \"9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e\"" May 17 00:35:11.693775 kubelet[2510]: E0517 00:35:11.693633 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:11.695877 containerd[1461]: time="2025-05-17T00:35:11.695747640Z" level=info msg="CreateContainer within sandbox \"9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:35:12.178926 systemd-networkd[1401]: cali9d520b61ad7: Link UP May 17 00:35:12.179095 systemd-networkd[1401]: cali9d520b61ad7: Gained carrier May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.138 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0 calico-apiserver-698b9c5d64- calico-apiserver a5f3228d-9614-4416-8a88-802cb784679f 1014 0 2025-05-17 00:34:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698b9c5d64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-698b9c5d64-9t9c6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d520b61ad7 [] [] }} ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.138 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.173 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" HandleID="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.173 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" HandleID="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002ad010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-698b9c5d64-9t9c6", "timestamp":"2025-05-17 00:35:11.173155675 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.173 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.276 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.276 [INFO][4801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.458 [INFO][4801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.750 [INFO][4801] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.988 [INFO][4801] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:11.990 [INFO][4801] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.021 [INFO][4801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.021 [INFO][4801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.022 [INFO][4801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2 May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.125 [INFO][4801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.172 [INFO][4801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.172 [INFO][4801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" host="localhost" May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.172 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
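[Editor's note — not part of the log] The two concurrent CNI ADDs above ([4799] for coredns, [4801] for the calico-apiserver pod) serialize on Calico's host-wide IPAM lock: [4801] logs "About to acquire" at 00:35:11.173 but "Acquired" only at 00:35:11.276, immediately after [4799] logs "Released" at 00:35:11.273. A rough back-of-the-envelope on the millisecond-granular timestamps:

```python
from datetime import datetime

# Timestamps taken from the ipam_plugin.go lines above; the trace only
# prints millisecond precision, so the result is approximate.
fmt = "%H:%M:%S.%f"
about_to_acquire = datetime.strptime("00:35:11.173", fmt)
acquired         = datetime.strptime("00:35:11.276", fmt)
print((acquired - about_to_acquire).total_seconds())  # 0.103 -> ~103 ms queued on the lock
```

Because the lock is host-wide, bursts of pod creations on one node pay this queueing cost per ADD; the later steps of [4801] (block load at 11.990, block write at 12.125, claim at 12.172) stretch the same request to roughly a full second end to end.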
May 17 00:35:12.208668 containerd[1461]: 2025-05-17 00:35:12.172 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" HandleID="k8s-pod-network.1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:12.209435 containerd[1461]: 2025-05-17 00:35:12.176 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f3228d-9614-4416-8a88-802cb784679f", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-698b9c5d64-9t9c6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d520b61ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:12.209435 containerd[1461]: 2025-05-17 00:35:12.176 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:12.209435 containerd[1461]: 2025-05-17 00:35:12.176 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d520b61ad7 ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:12.209435 containerd[1461]: 2025-05-17 00:35:12.178 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:12.209435 containerd[1461]: 2025-05-17 00:35:12.178 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f3228d-9614-4416-8a88-802cb784679f", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2", Pod:"calico-apiserver-698b9c5d64-9t9c6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d520b61ad7", MAC:"56:5b:38:2d:ee:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:12.209435 containerd[1461]: 2025-05-17 00:35:12.204 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2" Namespace="calico-apiserver" Pod="calico-apiserver-698b9c5d64-9t9c6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:12.240581 containerd[1461]: time="2025-05-17T00:35:12.240223933Z" level=info msg="CreateContainer within sandbox \"9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16c2657f6066704597b5a0a8d95df116610770b370d204d29fd77550c9f9ee59\"" May 17 00:35:12.241128 containerd[1461]: time="2025-05-17T00:35:12.241075964Z" level=info msg="StartContainer for \"16c2657f6066704597b5a0a8d95df116610770b370d204d29fd77550c9f9ee59\"" May 17 00:35:12.271655 containerd[1461]: time="2025-05-17T00:35:12.270509032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:12.271655 containerd[1461]: time="2025-05-17T00:35:12.270587730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:12.271655 containerd[1461]: time="2025-05-17T00:35:12.270601636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:12.271655 containerd[1461]: time="2025-05-17T00:35:12.270714919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:12.281046 systemd[1]: Started cri-containerd-16c2657f6066704597b5a0a8d95df116610770b370d204d29fd77550c9f9ee59.scope - libcontainer container 16c2657f6066704597b5a0a8d95df116610770b370d204d29fd77550c9f9ee59. May 17 00:35:12.298969 systemd[1]: Started cri-containerd-1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2.scope - libcontainer container 1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2. May 17 00:35:12.330617 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:12.334771 containerd[1461]: time="2025-05-17T00:35:12.334706979Z" level=info msg="StartContainer for \"16c2657f6066704597b5a0a8d95df116610770b370d204d29fd77550c9f9ee59\" returns successfully" May 17 00:35:12.379243 containerd[1461]: time="2025-05-17T00:35:12.379038424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b9c5d64-9t9c6,Uid:a5f3228d-9614-4416-8a88-802cb784679f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2\"" May 17 00:35:12.420837 kubelet[2510]: E0517 00:35:12.420721 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:12.570788 kubelet[2510]: I0517 00:35:12.570642 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fmrv9" podStartSLOduration=46.570624293 podStartE2EDuration="46.570624293s" podCreationTimestamp="2025-05-17 00:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:12.568346127 +0000 UTC m=+51.707354420" watchObservedRunningTime="2025-05-17 00:35:12.570624293 +0000 UTC m=+51.709632576" May 17 00:35:12.908569 containerd[1461]: time="2025-05-17T00:35:12.908505426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:12.909418 containerd[1461]: time="2025-05-17T00:35:12.909207043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:35:12.910417 containerd[1461]: time="2025-05-17T00:35:12.910380906Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:12.912907 containerd[1461]: time="2025-05-17T00:35:12.912616673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:12.913417 containerd[1461]: time="2025-05-17T00:35:12.913382190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.029642822s" May 17 00:35:12.913454 containerd[1461]: time="2025-05-17T00:35:12.913428297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" 
returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:35:12.914276 containerd[1461]: time="2025-05-17T00:35:12.914256792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:35:12.915254 containerd[1461]: time="2025-05-17T00:35:12.915220049Z" level=info msg="CreateContainer within sandbox \"25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:35:12.976676 systemd-networkd[1401]: calic731d5a6843: Gained IPv6LL May 17 00:35:13.009845 containerd[1461]: time="2025-05-17T00:35:13.009804084Z" level=info msg="CreateContainer within sandbox \"25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b85cc669be1d0aff0ee00955beb96cccac198a4c3d5208c6e560091952b79237\"" May 17 00:35:13.010356 containerd[1461]: time="2025-05-17T00:35:13.010326745Z" level=info msg="StartContainer for \"b85cc669be1d0aff0ee00955beb96cccac198a4c3d5208c6e560091952b79237\"" May 17 00:35:13.043744 systemd[1]: Started cri-containerd-b85cc669be1d0aff0ee00955beb96cccac198a4c3d5208c6e560091952b79237.scope - libcontainer container b85cc669be1d0aff0ee00955beb96cccac198a4c3d5208c6e560091952b79237. May 17 00:35:13.296758 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL May 17 00:35:13.552711 systemd-networkd[1401]: cali9d520b61ad7: Gained IPv6LL May 17 00:35:13.561700 containerd[1461]: time="2025-05-17T00:35:13.561233011Z" level=info msg="StartContainer for \"b85cc669be1d0aff0ee00955beb96cccac198a4c3d5208c6e560091952b79237\" returns successfully" May 17 00:35:13.564225 kubelet[2510]: E0517 00:35:13.564192 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:14.565982 kubelet[2510]: E0517 00:35:14.565896 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:14.577495 kubelet[2510]: I0517 00:35:14.577420 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-698b9c5d64-6nfp6" podStartSLOduration=34.270635072 podStartE2EDuration="37.577397543s" podCreationTimestamp="2025-05-17 00:34:37 +0000 UTC" firstStartedPulling="2025-05-17 00:35:09.607399012 +0000 UTC m=+48.746407295" lastFinishedPulling="2025-05-17 00:35:12.914161483 +0000 UTC m=+52.053169766" observedRunningTime="2025-05-17 00:35:14.576665751 +0000 UTC m=+53.715674034" watchObservedRunningTime="2025-05-17 00:35:14.577397543 +0000 UTC m=+53.716405836" May 17 00:35:14.581856 systemd[1]: Started sshd@7-10.0.0.5:22-10.0.0.1:57290.service - OpenSSH per-connection server daemon (10.0.0.1:57290). May 17 00:35:14.654950 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 57290 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:14.657523 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:14.663746 systemd-logind[1447]: New session 8 of user core. May 17 00:35:14.668690 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 17 00:35:14.809910 sshd[5100]: pam_unix(sshd:session): session closed for user core May 17 00:35:14.813733 systemd[1]: sshd@7-10.0.0.5:22-10.0.0.1:57290.service: Deactivated successfully. May 17 00:35:14.815767 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:35:14.816314 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. May 17 00:35:14.817238 systemd-logind[1447]: Removed session 8. May 17 00:35:16.847391 containerd[1461]: time="2025-05-17T00:35:16.847331514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:16.848412 containerd[1461]: time="2025-05-17T00:35:16.848353862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:35:16.849728 containerd[1461]: time="2025-05-17T00:35:16.849691964Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:16.852174 containerd[1461]: time="2025-05-17T00:35:16.852143285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:16.853283 containerd[1461]: time="2025-05-17T00:35:16.852769440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.938419003s" May 17 00:35:16.853283 containerd[1461]: time="2025-05-17T00:35:16.852824864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:35:16.853850 containerd[1461]: time="2025-05-17T00:35:16.853820372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:35:16.863835 containerd[1461]: time="2025-05-17T00:35:16.863792367Z" level=info msg="CreateContainer within sandbox \"c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:35:16.880956 containerd[1461]: time="2025-05-17T00:35:16.880901969Z" level=info msg="CreateContainer within sandbox \"c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"705595b1d077c07c6243628446c5b39977cd986a8c95639156afc9bfcf31c428\"" May 17 00:35:16.881619 containerd[1461]: time="2025-05-17T00:35:16.881492026Z" level=info msg="StartContainer for \"705595b1d077c07c6243628446c5b39977cd986a8c95639156afc9bfcf31c428\"" May 17 00:35:16.911693 systemd[1]: Started cri-containerd-705595b1d077c07c6243628446c5b39977cd986a8c95639156afc9bfcf31c428.scope - libcontainer container 705595b1d077c07c6243628446c5b39977cd986a8c95639156afc9bfcf31c428. 
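[Editor's note — not part of the log] On the recurring kubelet dns.go warnings: glibc's resolver honors at most three nameserver entries (MAXNS), so kubelet omits anything beyond that when it synthesizes a pod's resolv.conf and logs the applied line, which is why every warning in this trace shows exactly "1.1.1.1 1.0.0.1 8.8.8.8". A toy illustration; the fourth server below is hypothetical, since the log does not say what was dropped:

```python
# MAXNS = 3 is glibc's compile-time resolver limit; kubelet enforces the
# same cap when writing a pod's resolv.conf.
MAXNS = 3
configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.53"]  # 4th entry is hypothetical
applied = configured[:MAXNS]
print("applied nameserver line is:", " ".join(applied))
# applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8
```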
May 17 00:35:17.013768 containerd[1461]: time="2025-05-17T00:35:17.013721794Z" level=info msg="StartContainer for \"705595b1d077c07c6243628446c5b39977cd986a8c95639156afc9bfcf31c428\" returns successfully" May 17 00:35:17.589576 kubelet[2510]: I0517 00:35:17.587215 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66b4cdbc55-74hhx" podStartSLOduration=29.696742296 podStartE2EDuration="36.587193744s" podCreationTimestamp="2025-05-17 00:34:41 +0000 UTC" firstStartedPulling="2025-05-17 00:35:09.963193355 +0000 UTC m=+49.102201638" lastFinishedPulling="2025-05-17 00:35:16.853644803 +0000 UTC m=+55.992653086" observedRunningTime="2025-05-17 00:35:17.58608821 +0000 UTC m=+56.725096493" watchObservedRunningTime="2025-05-17 00:35:17.587193744 +0000 UTC m=+56.726202037" May 17 00:35:19.826041 systemd[1]: Started sshd@8-10.0.0.5:22-10.0.0.1:57298.service - OpenSSH per-connection server daemon (10.0.0.1:57298). May 17 00:35:19.928791 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 57298 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:19.930402 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:19.936565 systemd-logind[1447]: New session 9 of user core. May 17 00:35:19.946818 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:35:19.993573 containerd[1461]: time="2025-05-17T00:35:19.993495861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:19.994406 containerd[1461]: time="2025-05-17T00:35:19.994357985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:35:19.995631 containerd[1461]: time="2025-05-17T00:35:19.995598895Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:19.998626 containerd[1461]: time="2025-05-17T00:35:19.998594823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:19.999588 containerd[1461]: time="2025-05-17T00:35:19.999521161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 3.14566956s" May 17 00:35:19.999588 containerd[1461]: time="2025-05-17T00:35:19.999579464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:35:20.001368 containerd[1461]: time="2025-05-17T00:35:20.001322830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:35:20.003121 containerd[1461]: time="2025-05-17T00:35:20.003095629Z" level=info msg="CreateContainer within sandbox \"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:35:20.027869 containerd[1461]: time="2025-05-17T00:35:20.027785168Z" level=info msg="CreateContainer within sandbox 
\"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"55bdad2d1e60fbf27630b848bda252fab80a5ce59355c87aef2ae9121cb08347\"" May 17 00:35:20.028900 containerd[1461]: time="2025-05-17T00:35:20.028864933Z" level=info msg="StartContainer for \"55bdad2d1e60fbf27630b848bda252fab80a5ce59355c87aef2ae9121cb08347\"" May 17 00:35:20.090815 systemd[1]: Started cri-containerd-55bdad2d1e60fbf27630b848bda252fab80a5ce59355c87aef2ae9121cb08347.scope - libcontainer container 55bdad2d1e60fbf27630b848bda252fab80a5ce59355c87aef2ae9121cb08347. May 17 00:35:20.121238 sshd[5203]: pam_unix(sshd:session): session closed for user core May 17 00:35:20.128768 systemd[1]: sshd@8-10.0.0.5:22-10.0.0.1:57298.service: Deactivated successfully. May 17 00:35:20.131927 containerd[1461]: time="2025-05-17T00:35:20.131876758Z" level=info msg="StartContainer for \"55bdad2d1e60fbf27630b848bda252fab80a5ce59355c87aef2ae9121cb08347\" returns successfully" May 17 00:35:20.132111 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:35:20.134541 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. May 17 00:35:20.135560 systemd-logind[1447]: Removed session 9. May 17 00:35:20.401026 containerd[1461]: time="2025-05-17T00:35:20.400952221Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:20.402061 containerd[1461]: time="2025-05-17T00:35:20.401971919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:35:20.404204 containerd[1461]: time="2025-05-17T00:35:20.404158682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 402.796304ms" May 17 00:35:20.404204 containerd[1461]: time="2025-05-17T00:35:20.404193809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:35:20.411977 containerd[1461]: time="2025-05-17T00:35:20.411788493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:35:20.413710 containerd[1461]: time="2025-05-17T00:35:20.413661607Z" level=info msg="CreateContainer within sandbox \"1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:35:20.429940 containerd[1461]: time="2025-05-17T00:35:20.429887476Z" level=info msg="CreateContainer within sandbox \"1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1ce093db47c4cdf947f44a2b4647b1c8d83daa5696370737934b5820320aed33\"" May 17 00:35:20.430565 containerd[1461]: time="2025-05-17T00:35:20.430518921Z" level=info msg="StartContainer for \"1ce093db47c4cdf947f44a2b4647b1c8d83daa5696370737934b5820320aed33\"" May 17 00:35:20.461768 systemd[1]: Started cri-containerd-1ce093db47c4cdf947f44a2b4647b1c8d83daa5696370737934b5820320aed33.scope - libcontainer container 1ce093db47c4cdf947f44a2b4647b1c8d83daa5696370737934b5820320aed33. 
May 17 00:35:20.507849 containerd[1461]: time="2025-05-17T00:35:20.507795148Z" level=info msg="StartContainer for \"1ce093db47c4cdf947f44a2b4647b1c8d83daa5696370737934b5820320aed33\" returns successfully" May 17 00:35:20.928454 containerd[1461]: time="2025-05-17T00:35:20.928407717Z" level=info msg="StopPodSandbox for \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\"" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:20.983 [WARNING][5310] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"bef19986-6d7f-4327-9173-74879321bea4", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986", Pod:"calico-apiserver-698b9c5d64-6nfp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali546dfe2574f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:20.984 [INFO][5310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:20.984 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" iface="eth0" netns="" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:20.984 [INFO][5310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:20.984 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:21.006 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:21.006 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:21.006 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:21.015 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:21.015 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:21.017 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.026174 containerd[1461]: 2025-05-17 00:35:21.021 [INFO][5310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.026999 containerd[1461]: time="2025-05-17T00:35:21.026212356Z" level=info msg="TearDown network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\" successfully" May 17 00:35:21.026999 containerd[1461]: time="2025-05-17T00:35:21.026263485Z" level=info msg="StopPodSandbox for \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\" returns successfully" May 17 00:35:21.026999 containerd[1461]: time="2025-05-17T00:35:21.026863038Z" level=info msg="RemovePodSandbox for \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\"" May 17 00:35:21.029061 containerd[1461]: time="2025-05-17T00:35:21.029036701Z" level=info msg="Forcibly stopping sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\"" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.139 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"bef19986-6d7f-4327-9173-74879321bea4", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25dac39e924419adccf950e7c1e56160b0daa7ee185c5a002135b49a94a1f986", Pod:"calico-apiserver-698b9c5d64-6nfp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali546dfe2574f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.139 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.139 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" iface="eth0" netns="" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.139 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.139 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.165 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.165 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.165 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.233 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.233 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" HandleID="k8s-pod-network.0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" Workload="localhost-k8s-calico--apiserver--698b9c5d64--6nfp6-eth0" May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.236 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.242882 containerd[1461]: 2025-05-17 00:35:21.240 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59" May 17 00:35:21.242882 containerd[1461]: time="2025-05-17T00:35:21.242820707Z" level=info msg="TearDown network for sandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\" successfully" May 17 00:35:21.280511 containerd[1461]: time="2025-05-17T00:35:21.280453479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:21.280695 containerd[1461]: time="2025-05-17T00:35:21.280579083Z" level=info msg="RemovePodSandbox \"0708634c078edac12438d5e2c7878dac166a81e9a43ae2a2b9d22b575904ae59\" returns successfully" May 17 00:35:21.281235 containerd[1461]: time="2025-05-17T00:35:21.281193854Z" level=info msg="StopPodSandbox for \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\"" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.319 [WARNING][5370] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x8pqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be42aafd-fcc6-4236-98b3-c64eba42cdf6", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be", Pod:"csi-node-driver-x8pqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f2fb612957", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.319 [INFO][5370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.319 [INFO][5370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" iface="eth0" netns="" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.319 [INFO][5370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.319 [INFO][5370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.345 [INFO][5379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.346 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.346 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.354 [WARNING][5379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.354 [INFO][5379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.356 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.365066 containerd[1461]: 2025-05-17 00:35:21.361 [INFO][5370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.365854 containerd[1461]: time="2025-05-17T00:35:21.365085538Z" level=info msg="TearDown network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\" successfully" May 17 00:35:21.365854 containerd[1461]: time="2025-05-17T00:35:21.365111958Z" level=info msg="StopPodSandbox for \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\" returns successfully" May 17 00:35:21.365854 containerd[1461]: time="2025-05-17T00:35:21.365649110Z" level=info msg="RemovePodSandbox for \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\"" May 17 00:35:21.365854 containerd[1461]: time="2025-05-17T00:35:21.365670972Z" level=info msg="Forcibly stopping sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\"" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.404 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x8pqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be42aafd-fcc6-4236-98b3-c64eba42cdf6", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be", Pod:"csi-node-driver-x8pqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f2fb612957", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.405 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.405 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" iface="eth0" netns="" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.405 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.405 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.429 [INFO][5405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.430 [INFO][5405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.430 [INFO][5405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.437 [WARNING][5405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.437 [INFO][5405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" HandleID="k8s-pod-network.5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" Workload="localhost-k8s-csi--node--driver--x8pqj-eth0" May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.443 [INFO][5405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.449613 containerd[1461]: 2025-05-17 00:35:21.446 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b" May 17 00:35:21.450128 containerd[1461]: time="2025-05-17T00:35:21.449668220Z" level=info msg="TearDown network for sandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\" successfully" May 17 00:35:21.487335 containerd[1461]: time="2025-05-17T00:35:21.487275382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:21.487498 containerd[1461]: time="2025-05-17T00:35:21.487344666Z" level=info msg="RemovePodSandbox \"5a5be781f33ab7900740899232ee2ea3059d6356b0bc33dcc18a6be377c1dc3b\" returns successfully" May 17 00:35:21.487881 containerd[1461]: time="2025-05-17T00:35:21.487831820Z" level=info msg="StopPodSandbox for \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\"" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.527 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f3228d-9614-4416-8a88-802cb784679f", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2", Pod:"calico-apiserver-698b9c5d64-9t9c6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d520b61ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.527 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.527 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" iface="eth0" netns="" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.527 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.527 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.552 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.552 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.552 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.557 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.557 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.558 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.569408 containerd[1461]: 2025-05-17 00:35:21.565 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.569408 containerd[1461]: time="2025-05-17T00:35:21.569074499Z" level=info msg="TearDown network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\" successfully" May 17 00:35:21.569408 containerd[1461]: time="2025-05-17T00:35:21.569134125Z" level=info msg="StopPodSandbox for \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\" returns successfully" May 17 00:35:21.571092 containerd[1461]: time="2025-05-17T00:35:21.571048094Z" level=info msg="RemovePodSandbox for \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\"" May 17 00:35:21.571092 containerd[1461]: time="2025-05-17T00:35:21.571081489Z" level=info msg="Forcibly stopping sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\"" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.618 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0", GenerateName:"calico-apiserver-698b9c5d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5f3228d-9614-4416-8a88-802cb784679f", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b9c5d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8d11e6e73bac880d46075d9964b8c49539dd5c0e9acacb0bf2d9488c00aaf2", Pod:"calico-apiserver-698b9c5d64-9t9c6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d520b61ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.618 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.618 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" iface="eth0" netns="" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.618 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.618 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.639 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.639 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.639 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.644 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.644 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" HandleID="k8s-pod-network.0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" Workload="localhost-k8s-calico--apiserver--698b9c5d64--9t9c6-eth0" May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.645 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.651721 containerd[1461]: 2025-05-17 00:35:21.648 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539" May 17 00:35:21.652181 containerd[1461]: time="2025-05-17T00:35:21.651746498Z" level=info msg="TearDown network for sandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\" successfully" May 17 00:35:21.675104 containerd[1461]: time="2025-05-17T00:35:21.675043112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:21.675186 containerd[1461]: time="2025-05-17T00:35:21.675104642Z" level=info msg="RemovePodSandbox \"0396068ce0d07ee21f9b0bfad1ca65f7616c26af94690eace3f234170f145539\" returns successfully" May 17 00:35:21.675631 containerd[1461]: time="2025-05-17T00:35:21.675607747Z" level=info msg="StopPodSandbox for \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\"" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.710 [WARNING][5476] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c975f9da-6c98-4900-bbd2-08541503e92e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e", Pod:"coredns-668d6bf9bc-fmrv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic731d5a6843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.711 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.711 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" iface="eth0" netns="" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.711 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.711 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.735 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.735 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.736 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.741 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.741 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.742 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.748848 containerd[1461]: 2025-05-17 00:35:21.745 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.749412 containerd[1461]: time="2025-05-17T00:35:21.748896278Z" level=info msg="TearDown network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\" successfully" May 17 00:35:21.749412 containerd[1461]: time="2025-05-17T00:35:21.748929373Z" level=info msg="StopPodSandbox for \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\" returns successfully" May 17 00:35:21.749412 containerd[1461]: time="2025-05-17T00:35:21.749404022Z" level=info msg="RemovePodSandbox for \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\"" May 17 00:35:21.749477 containerd[1461]: time="2025-05-17T00:35:21.749430774Z" level=info msg="Forcibly stopping sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\"" May 17 00:35:21.822080 kubelet[2510]: I0517 00:35:21.821510 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-698b9c5d64-9t9c6" podStartSLOduration=36.790988244 podStartE2EDuration="44.821487228s" podCreationTimestamp="2025-05-17 00:34:37 +0000 UTC" firstStartedPulling="2025-05-17 00:35:12.381139719 +0000 UTC m=+51.520148002" lastFinishedPulling="2025-05-17 00:35:20.411638703 +0000 UTC m=+59.550646986" observedRunningTime="2025-05-17 00:35:20.6008775 +0000 UTC m=+59.739885783" watchObservedRunningTime="2025-05-17 00:35:21.821487228 +0000 UTC m=+60.960495511" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.789 [WARNING][5503] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c975f9da-6c98-4900-bbd2-08541503e92e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9abbb99a6517c047add811090e18f9a533e77c7372374f24e73f2e5242584e0e", Pod:"coredns-668d6bf9bc-fmrv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic731d5a6843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.789 [INFO][5503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.789 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" iface="eth0" netns="" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.789 [INFO][5503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.789 [INFO][5503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.810 [INFO][5511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.811 [INFO][5511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.811 [INFO][5511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.819 [WARNING][5511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.819 [INFO][5511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" HandleID="k8s-pod-network.442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" Workload="localhost-k8s-coredns--668d6bf9bc--fmrv9-eth0" May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.821 [INFO][5511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.832597 containerd[1461]: 2025-05-17 00:35:21.828 [INFO][5503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e" May 17 00:35:21.833039 containerd[1461]: time="2025-05-17T00:35:21.832652539Z" level=info msg="TearDown network for sandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\" successfully" May 17 00:35:21.892888 containerd[1461]: time="2025-05-17T00:35:21.892843965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:21.893067 containerd[1461]: time="2025-05-17T00:35:21.892905564Z" level=info msg="RemovePodSandbox \"442c17652b3dd1a7c4ba310f407ceb13c0108ebd56fce19da4b22c3e2b4e590e\" returns successfully" May 17 00:35:21.893524 containerd[1461]: time="2025-05-17T00:35:21.893485247Z" level=info msg="StopPodSandbox for \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\"" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.930 [WARNING][5530] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0", GenerateName:"calico-kube-controllers-66b4cdbc55-", Namespace:"calico-system", SelfLink:"", UID:"9537133f-5e07-4b0f-93c4-cc1221685e83", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66b4cdbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64", Pod:"calico-kube-controllers-66b4cdbc55-74hhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid461390d1e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.931 [INFO][5530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.931 [INFO][5530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" iface="eth0" netns="" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.931 [INFO][5530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.931 [INFO][5530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.952 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.952 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.952 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.957 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.957 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.958 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.964054 containerd[1461]: 2025-05-17 00:35:21.960 [INFO][5530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:21.964465 containerd[1461]: time="2025-05-17T00:35:21.964083996Z" level=info msg="TearDown network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\" successfully" May 17 00:35:21.964465 containerd[1461]: time="2025-05-17T00:35:21.964119435Z" level=info msg="StopPodSandbox for \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\" returns successfully" May 17 00:35:21.964570 containerd[1461]: time="2025-05-17T00:35:21.964523007Z" level=info msg="RemovePodSandbox for \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\"" May 17 00:35:21.964570 containerd[1461]: time="2025-05-17T00:35:21.964561842Z" level=info msg="Forcibly stopping sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\"" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.000 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0", GenerateName:"calico-kube-controllers-66b4cdbc55-", Namespace:"calico-system", SelfLink:"", UID:"9537133f-5e07-4b0f-93c4-cc1221685e83", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66b4cdbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c597bbe1b49a08ac76d9e163443424d8872ea7e4aff3d6fd63144e00a135ac64", Pod:"calico-kube-controllers-66b4cdbc55-74hhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid461390d1e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.001 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.001 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" iface="eth0" netns="" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.001 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.001 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.023 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.023 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.023 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.029 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.029 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" HandleID="k8s-pod-network.4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" Workload="localhost-k8s-calico--kube--controllers--66b4cdbc55--74hhx-eth0" May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.031 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.037338 containerd[1461]: 2025-05-17 00:35:22.034 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6" May 17 00:35:22.038025 containerd[1461]: time="2025-05-17T00:35:22.037376954Z" level=info msg="TearDown network for sandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\" successfully" May 17 00:35:22.104188 containerd[1461]: time="2025-05-17T00:35:22.104100918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:22.104348 containerd[1461]: time="2025-05-17T00:35:22.104196964Z" level=info msg="RemovePodSandbox \"4758d480460f992bfead16de0b498a8ff733faf92915313ed1f34a1c9ad27dd6\" returns successfully" May 17 00:35:22.104784 containerd[1461]: time="2025-05-17T00:35:22.104749233Z" level=info msg="StopPodSandbox for \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\"" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.154 [WARNING][5589] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0b2a5ee-c039-427e-9a8b-ca7df66976a4", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd", Pod:"coredns-668d6bf9bc-wd2nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6244fca7363", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.154 [INFO][5589] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.154 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" iface="eth0" netns="" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.154 [INFO][5589] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.155 [INFO][5589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.176 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.177 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.177 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.183 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.183 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.185 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.190636 containerd[1461]: 2025-05-17 00:35:22.187 [INFO][5589] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.191129 containerd[1461]: time="2025-05-17T00:35:22.190672557Z" level=info msg="TearDown network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\" successfully" May 17 00:35:22.191129 containerd[1461]: time="2025-05-17T00:35:22.190696102Z" level=info msg="StopPodSandbox for \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\" returns successfully" May 17 00:35:22.191184 containerd[1461]: time="2025-05-17T00:35:22.191156263Z" level=info msg="RemovePodSandbox for \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\"" May 17 00:35:22.191184 containerd[1461]: time="2025-05-17T00:35:22.191178445Z" level=info msg="Forcibly stopping sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\"" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.259 [WARNING][5616] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0b2a5ee-c039-427e-9a8b-ca7df66976a4", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f86f1b1a0b5facc03100fd3a4f407e1f09307c1dabcbea73174ae6e8ce581dd", Pod:"coredns-668d6bf9bc-wd2nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6244fca7363", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.259 [INFO][5616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.259 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" iface="eth0" netns="" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.259 [INFO][5616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.259 [INFO][5616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.284 [INFO][5627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.284 [INFO][5627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.284 [INFO][5627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.291 [WARNING][5627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.292 [INFO][5627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" HandleID="k8s-pod-network.532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" Workload="localhost-k8s-coredns--668d6bf9bc--wd2nk-eth0" May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.293 [INFO][5627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.300404 containerd[1461]: 2025-05-17 00:35:22.296 [INFO][5616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485" May 17 00:35:22.303053 containerd[1461]: time="2025-05-17T00:35:22.300424124Z" level=info msg="TearDown network for sandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\" successfully" May 17 00:35:22.356348 containerd[1461]: time="2025-05-17T00:35:22.356226066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:22.356348 containerd[1461]: time="2025-05-17T00:35:22.356305860Z" level=info msg="RemovePodSandbox \"532e9d11852f13b3d2c00eb416cabe4f84d4becc77bbeb5455f1288f7e8b1485\" returns successfully" May 17 00:35:22.356875 containerd[1461]: time="2025-05-17T00:35:22.356823924Z" level=info msg="StopPodSandbox for \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\"" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.405 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1cf92987-bd0b-472f-a9b0-2d45c7497558", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6", Pod:"goldmane-78d55f7ddc-vss2s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a4a00c2d3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.406 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.406 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" iface="eth0" netns="" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.406 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.406 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.428 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.428 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.428 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.436 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.436 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.438 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.444637 containerd[1461]: 2025-05-17 00:35:22.441 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.445084 containerd[1461]: time="2025-05-17T00:35:22.444685961Z" level=info msg="TearDown network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\" successfully" May 17 00:35:22.445084 containerd[1461]: time="2025-05-17T00:35:22.444716280Z" level=info msg="StopPodSandbox for \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\" returns successfully" May 17 00:35:22.445263 containerd[1461]: time="2025-05-17T00:35:22.445239073Z" level=info msg="RemovePodSandbox for \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\"" May 17 00:35:22.445293 containerd[1461]: time="2025-05-17T00:35:22.445267017Z" level=info msg="Forcibly stopping sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\"" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.477 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1cf92987-bd0b-472f-a9b0-2d45c7497558", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d3604f11a2680d4b48f54337b7842024d1fad4152b0732c5b4c5e9f1fd6b9f6", Pod:"goldmane-78d55f7ddc-vss2s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a4a00c2d3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.477 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.477 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" iface="eth0" netns="" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.477 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.477 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.502 [INFO][5684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.502 [INFO][5684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.503 [INFO][5684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.509 [WARNING][5684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.509 [INFO][5684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" HandleID="k8s-pod-network.ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" Workload="localhost-k8s-goldmane--78d55f7ddc--vss2s-eth0" May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.511 [INFO][5684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.517882 containerd[1461]: 2025-05-17 00:35:22.514 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee" May 17 00:35:22.518390 containerd[1461]: time="2025-05-17T00:35:22.517907257Z" level=info msg="TearDown network for sandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\" successfully" May 17 00:35:22.600907 containerd[1461]: time="2025-05-17T00:35:22.600853345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:22.601082 containerd[1461]: time="2025-05-17T00:35:22.600937668Z" level=info msg="RemovePodSandbox \"ba1d9c12ed57cacd84495447007332546a5ba43473070d64e0b33029e9d739ee\" returns successfully" May 17 00:35:22.601275 containerd[1461]: time="2025-05-17T00:35:22.601237278Z" level=info msg="StopPodSandbox for \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\"" May 17 00:35:22.605287 containerd[1461]: time="2025-05-17T00:35:22.605223888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:22.643162 containerd[1461]: time="2025-05-17T00:35:22.642962200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:35:22.670627 containerd[1461]: time="2025-05-17T00:35:22.670580092Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.638 [WARNING][5703] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" WorkloadEndpoint="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.638 [INFO][5703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.638 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" iface="eth0" netns="" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.638 [INFO][5703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.638 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.658 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.658 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.658 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.727 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.727 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.730 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.736814 containerd[1461]: 2025-05-17 00:35:22.734 [INFO][5703] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.737246 containerd[1461]: time="2025-05-17T00:35:22.736859415Z" level=info msg="TearDown network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\" successfully" May 17 00:35:22.737246 containerd[1461]: time="2025-05-17T00:35:22.736889002Z" level=info msg="StopPodSandbox for \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\" returns successfully" May 17 00:35:22.737405 containerd[1461]: time="2025-05-17T00:35:22.737370705Z" level=info msg="RemovePodSandbox for \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\"" May 17 00:35:22.737443 containerd[1461]: time="2025-05-17T00:35:22.737404390Z" level=info msg="Forcibly stopping sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\"" May 17 00:35:22.744149 containerd[1461]: time="2025-05-17T00:35:22.744093424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:35:22.745063 containerd[1461]: time="2025-05-17T00:35:22.745010700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.333172461s" May 17 00:35:22.745063 containerd[1461]: time="2025-05-17T00:35:22.745057691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:35:22.747561 containerd[1461]: time="2025-05-17T00:35:22.747504139Z" level=info msg="CreateContainer within sandbox \"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:35:22.868139 containerd[1461]: time="2025-05-17T00:35:22.868084496Z" level=info msg="CreateContainer within sandbox \"999fd1e56c99be7d1fbd503d25f6ef69baea0c0e0f672d3719284c5b1b6e06be\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"171907a69a0747fd913f85b19d412150dce04eafe284a73c04979b769360cb66\"" May 17 00:35:22.871173 containerd[1461]: time="2025-05-17T00:35:22.871081459Z" level=info msg="StartContainer for \"171907a69a0747fd913f85b19d412150dce04eafe284a73c04979b769360cb66\"" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.830 [WARNING][5731] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" WorkloadEndpoint="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.831 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.831 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" iface="eth0" netns="" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.831 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.831 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.855 [INFO][5741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.855 [INFO][5741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.856 [INFO][5741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.863 [WARNING][5741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.863 [INFO][5741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" HandleID="k8s-pod-network.bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" Workload="localhost-k8s-whisker--5d98bcff46--jh685-eth0" May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.864 [INFO][5741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.871580 containerd[1461]: 2025-05-17 00:35:22.867 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a" May 17 00:35:22.871838 containerd[1461]: time="2025-05-17T00:35:22.871620984Z" level=info msg="TearDown network for sandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\" successfully" May 17 00:35:22.876341 containerd[1461]: time="2025-05-17T00:35:22.876295537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:35:22.876456 containerd[1461]: time="2025-05-17T00:35:22.876360884Z" level=info msg="RemovePodSandbox \"bafe6215b70229559c0932a38211666b486ae156ceba6abe508e3539cd1e579a\" returns successfully" May 17 00:35:22.915828 systemd[1]: Started cri-containerd-171907a69a0747fd913f85b19d412150dce04eafe284a73c04979b769360cb66.scope - libcontainer container 171907a69a0747fd913f85b19d412150dce04eafe284a73c04979b769360cb66. 
May 17 00:35:22.951655 containerd[1461]: time="2025-05-17T00:35:22.950878269Z" level=info msg="StartContainer for \"171907a69a0747fd913f85b19d412150dce04eafe284a73c04979b769360cb66\" returns successfully" May 17 00:35:23.012069 kubelet[2510]: I0517 00:35:23.012024 2510 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:35:23.012069 kubelet[2510]: I0517 00:35:23.012061 2510 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:35:23.618092 kubelet[2510]: I0517 00:35:23.618026 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x8pqj" podStartSLOduration=30.10878909 podStartE2EDuration="42.618009321s" podCreationTimestamp="2025-05-17 00:34:41 +0000 UTC" firstStartedPulling="2025-05-17 00:35:10.236633491 +0000 UTC m=+49.375641774" lastFinishedPulling="2025-05-17 00:35:22.745853722 +0000 UTC m=+61.884862005" observedRunningTime="2025-05-17 00:35:23.61747613 +0000 UTC m=+62.756484413" watchObservedRunningTime="2025-05-17 00:35:23.618009321 +0000 UTC m=+62.757017604" May 17 00:35:23.938973 containerd[1461]: time="2025-05-17T00:35:23.938798867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:35:24.214490 containerd[1461]: time="2025-05-17T00:35:24.214345849Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:24.255651 containerd[1461]: time="2025-05-17T00:35:24.255587249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:35:24.255651 containerd[1461]: time="2025-05-17T00:35:24.255616867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:24.255944 kubelet[2510]: E0517 00:35:24.255891 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:24.256316 kubelet[2510]: E0517 00:35:24.255951 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:24.256316 kubelet[2510]: E0517 
00:35:24.256076 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7aa13eb9e554e8d87b7837efa2e20d7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b597d656-vs877_calico-system(abd949cd-2e01-4075-875a-35887707269d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:24.258147 containerd[1461]: time="2025-05-17T00:35:24.258121168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:35:24.538601 containerd[1461]: time="2025-05-17T00:35:24.537891495Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:24.539600 containerd[1461]: time="2025-05-17T00:35:24.539561534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:24.539699 containerd[1461]: time="2025-05-17T00:35:24.539604707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:35:24.539815 kubelet[2510]: E0517 00:35:24.539771 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:24.540357 kubelet[2510]: E0517 00:35:24.539819 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:24.540357 kubelet[2510]: E0517 00:35:24.539945 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b597d656-vs877_calico-system(abd949cd-2e01-4075-875a-35887707269d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:24.541233 kubelet[2510]: E0517 00:35:24.541155 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-64b597d656-vs877" podUID="abd949cd-2e01-4075-875a-35887707269d" May 17 00:35:25.135185 systemd[1]: Started sshd@9-10.0.0.5:22-10.0.0.1:34054.service - OpenSSH per-connection server daemon (10.0.0.1:34054). May 17 00:35:25.216505 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 34054 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:25.215768 sshd[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:25.221110 systemd-logind[1447]: New session 10 of user core. May 17 00:35:25.225675 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:35:25.437367 sshd[5787]: pam_unix(sshd:session): session closed for user core May 17 00:35:25.442939 systemd[1]: sshd@9-10.0.0.5:22-10.0.0.1:34054.service: Deactivated successfully. May 17 00:35:25.445139 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:35:25.445825 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. May 17 00:35:25.446891 systemd-logind[1447]: Removed session 10. 
May 17 00:35:25.938652 containerd[1461]: time="2025-05-17T00:35:25.938603824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:35:26.162922 containerd[1461]: time="2025-05-17T00:35:26.162842558Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:26.164065 containerd[1461]: time="2025-05-17T00:35:26.164015032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:26.164139 containerd[1461]: time="2025-05-17T00:35:26.164070208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:35:26.164348 kubelet[2510]: E0517 00:35:26.164292 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:26.164829 kubelet[2510]: E0517 00:35:26.164354 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:26.164829 kubelet[2510]: E0517 00:35:26.164492 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4w9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-vss2s_calico-system(1cf92987-bd0b-472f-a9b0-2d45c7497558): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:26.165778 kubelet[2510]: E0517 00:35:26.165722 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:35:30.459153 systemd[1]: Started sshd@10-10.0.0.5:22-10.0.0.1:34064.service - OpenSSH per-connection server daemon (10.0.0.1:34064). May 17 00:35:30.513679 sshd[5804]: Accepted publickey for core from 10.0.0.1 port 34064 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:30.515719 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:30.520254 systemd-logind[1447]: New session 11 of user core. May 17 00:35:30.526692 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:35:30.660559 sshd[5804]: pam_unix(sshd:session): session closed for user core May 17 00:35:30.670897 systemd[1]: sshd@10-10.0.0.5:22-10.0.0.1:34064.service: Deactivated successfully. May 17 00:35:30.672759 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:35:30.674500 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. May 17 00:35:30.682093 systemd[1]: Started sshd@11-10.0.0.5:22-10.0.0.1:34066.service - OpenSSH per-connection server daemon (10.0.0.1:34066). May 17 00:35:30.683437 systemd-logind[1447]: Removed session 11. May 17 00:35:30.718395 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 34066 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:30.720009 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:30.724646 systemd-logind[1447]: New session 12 of user core. May 17 00:35:30.731682 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:35:30.891290 sshd[5820]: pam_unix(sshd:session): session closed for user core May 17 00:35:30.901122 systemd[1]: sshd@11-10.0.0.5:22-10.0.0.1:34066.service: Deactivated successfully. May 17 00:35:30.905344 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:35:30.908924 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. May 17 00:35:30.920089 systemd[1]: Started sshd@12-10.0.0.5:22-10.0.0.1:34074.service - OpenSSH per-connection server daemon (10.0.0.1:34074). May 17 00:35:30.921352 systemd-logind[1447]: Removed session 12. May 17 00:35:30.956324 sshd[5832]: Accepted publickey for core from 10.0.0.1 port 34074 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:30.958080 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:30.962290 systemd-logind[1447]: New session 13 of user core. May 17 00:35:30.973708 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:35:31.087985 sshd[5832]: pam_unix(sshd:session): session closed for user core May 17 00:35:31.092279 systemd[1]: sshd@12-10.0.0.5:22-10.0.0.1:34074.service: Deactivated successfully. May 17 00:35:31.094362 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:35:31.095363 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. May 17 00:35:31.096239 systemd-logind[1447]: Removed session 13. 
May 17 00:35:34.937849 kubelet[2510]: E0517 00:35:34.937790 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:36.101527 systemd[1]: Started sshd@13-10.0.0.5:22-10.0.0.1:33912.service - OpenSSH per-connection server daemon (10.0.0.1:33912). May 17 00:35:36.143495 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 33912 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:36.145498 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:36.150780 systemd-logind[1447]: New session 14 of user core. May 17 00:35:36.159778 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:35:36.284265 sshd[5858]: pam_unix(sshd:session): session closed for user core May 17 00:35:36.288888 systemd[1]: sshd@13-10.0.0.5:22-10.0.0.1:33912.service: Deactivated successfully. May 17 00:35:36.291095 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:35:36.291795 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. May 17 00:35:36.292881 systemd-logind[1447]: Removed session 14. May 17 00:35:37.938145 kubelet[2510]: E0517 00:35:37.938069 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-64b597d656-vs877" podUID="abd949cd-2e01-4075-875a-35887707269d" May 17 00:35:38.938782 kubelet[2510]: E0517 00:35:38.938499 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:35:41.300400 systemd[1]: Started sshd@14-10.0.0.5:22-10.0.0.1:33914.service - OpenSSH per-connection server daemon (10.0.0.1:33914). 
May 17 00:35:41.338657 sshd[5894]: Accepted publickey for core from 10.0.0.1 port 33914 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:41.340350 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:41.344645 systemd-logind[1447]: New session 15 of user core. May 17 00:35:41.354801 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:35:41.464861 sshd[5894]: pam_unix(sshd:session): session closed for user core May 17 00:35:41.469243 systemd[1]: sshd@14-10.0.0.5:22-10.0.0.1:33914.service: Deactivated successfully. May 17 00:35:41.471429 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:35:41.472326 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. May 17 00:35:41.473417 systemd-logind[1447]: Removed session 15. May 17 00:35:45.937265 kubelet[2510]: E0517 00:35:45.937217 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:46.480995 systemd[1]: Started sshd@15-10.0.0.5:22-10.0.0.1:49716.service - OpenSSH per-connection server daemon (10.0.0.1:49716). May 17 00:35:46.536201 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 49716 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:46.538133 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:46.542895 systemd-logind[1447]: New session 16 of user core. May 17 00:35:46.555727 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:35:46.710714 sshd[5909]: pam_unix(sshd:session): session closed for user core May 17 00:35:46.714503 systemd[1]: sshd@15-10.0.0.5:22-10.0.0.1:49716.service: Deactivated successfully. May 17 00:35:46.716559 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:35:46.717152 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. May 17 00:35:46.718029 systemd-logind[1447]: Removed session 16. 
May 17 00:35:49.938257 containerd[1461]: time="2025-05-17T00:35:49.938076989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:35:50.228980 containerd[1461]: time="2025-05-17T00:35:50.228746247Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:50.230183 containerd[1461]: time="2025-05-17T00:35:50.230133209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:50.230339 containerd[1461]: time="2025-05-17T00:35:50.230169929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:35:50.230434 kubelet[2510]: E0517 00:35:50.230375 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:50.230899 kubelet[2510]: E0517 00:35:50.230433 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:50.230899 kubelet[2510]: E0517 00:35:50.230586 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4w9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-vss2s_calico-system(1cf92987-bd0b-472f-a9b0-2d45c7497558): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:50.231873 kubelet[2510]: E0517 00:35:50.231790 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558" May 17 00:35:50.938056 kubelet[2510]: E0517 00:35:50.938013 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:51.726718 systemd[1]: Started sshd@16-10.0.0.5:22-10.0.0.1:49724.service - OpenSSH per-connection server daemon (10.0.0.1:49724). May 17 00:35:51.796577 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 49724 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:51.798431 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:51.803036 systemd-logind[1447]: New session 17 of user core. May 17 00:35:51.810767 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:35:51.987709 sshd[5942]: pam_unix(sshd:session): session closed for user core May 17 00:35:51.991471 systemd[1]: sshd@16-10.0.0.5:22-10.0.0.1:49724.service: Deactivated successfully. May 17 00:35:51.993520 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:35:51.994181 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. May 17 00:35:51.995052 systemd-logind[1447]: Removed session 17. May 17 00:35:52.938148 containerd[1461]: time="2025-05-17T00:35:52.937898188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:35:53.181651 containerd[1461]: time="2025-05-17T00:35:53.181570460Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:53.182980 containerd[1461]: time="2025-05-17T00:35:53.182928514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:53.183048 containerd[1461]: time="2025-05-17T00:35:53.182999099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:35:53.183984 kubelet[2510]: E0517 00:35:53.183930 2510 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:53.184525 kubelet[2510]: E0517 00:35:53.183989 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:53.184525 kubelet[2510]: E0517 00:35:53.184107 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7aa13eb9e554e8d87b7837efa2e20d7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b597d656-vs877_calico-system(abd949cd-2e01-4075-875a-35887707269d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:53.186211 containerd[1461]: time="2025-05-17T00:35:53.186186345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:35:53.433556 containerd[1461]: time="2025-05-17T00:35:53.433464880Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:53.434636 containerd[1461]: time="2025-05-17T00:35:53.434576205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:53.434636 containerd[1461]: time="2025-05-17T00:35:53.434605571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:35:53.434857 kubelet[2510]: E0517 00:35:53.434810 2510 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:53.434941 kubelet[2510]: E0517 00:35:53.434864 2510 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:53.435025 kubelet[2510]: E0517 00:35:53.434979 2510 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64b597d656-vs877_calico-system(abd949cd-2e01-4075-875a-35887707269d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:53.436211 kubelet[2510]: E0517 00:35:53.436157 2510 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-64b597d656-vs877" podUID="abd949cd-2e01-4075-875a-35887707269d" May 17 00:35:54.937955 kubelet[2510]: E0517 00:35:54.937907 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:57.006562 systemd[1]: Started sshd@17-10.0.0.5:22-10.0.0.1:44690.service - OpenSSH per-connection server daemon (10.0.0.1:44690). May 17 00:35:57.106148 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 44690 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:57.111629 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:57.118300 systemd-logind[1447]: New session 18 of user core. May 17 00:35:57.124912 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:35:57.303687 sshd[5984]: pam_unix(sshd:session): session closed for user core May 17 00:35:57.316114 systemd[1]: sshd@17-10.0.0.5:22-10.0.0.1:44690.service: Deactivated successfully. May 17 00:35:57.318622 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:35:57.320850 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. May 17 00:35:57.330818 systemd[1]: Started sshd@18-10.0.0.5:22-10.0.0.1:44704.service - OpenSSH per-connection server daemon (10.0.0.1:44704). May 17 00:35:57.332281 systemd-logind[1447]: Removed session 18. May 17 00:35:57.397791 sshd[5998]: Accepted publickey for core from 10.0.0.1 port 44704 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:57.400594 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:57.411100 systemd-logind[1447]: New session 19 of user core. May 17 00:35:57.420826 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:35:57.859175 sshd[5998]: pam_unix(sshd:session): session closed for user core May 17 00:35:57.870733 systemd[1]: sshd@18-10.0.0.5:22-10.0.0.1:44704.service: Deactivated successfully. May 17 00:35:57.873390 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:35:57.879117 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. May 17 00:35:57.885943 systemd[1]: Started sshd@19-10.0.0.5:22-10.0.0.1:44714.service - OpenSSH per-connection server daemon (10.0.0.1:44714). May 17 00:35:57.887627 systemd-logind[1447]: Removed session 19. 
May 17 00:35:57.978912 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 44714 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:57.989195 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:57.999959 systemd-logind[1447]: New session 20 of user core. May 17 00:35:58.008820 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:35:58.780362 sshd[6011]: pam_unix(sshd:session): session closed for user core May 17 00:35:58.790667 systemd[1]: sshd@19-10.0.0.5:22-10.0.0.1:44714.service: Deactivated successfully. May 17 00:35:58.793190 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:35:58.797721 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. May 17 00:35:58.808058 systemd[1]: Started sshd@20-10.0.0.5:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730). May 17 00:35:58.809419 systemd-logind[1447]: Removed session 20. May 17 00:35:58.849320 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:58.851485 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:58.855811 systemd-logind[1447]: New session 21 of user core. May 17 00:35:58.864720 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:35:59.312444 sshd[6032]: pam_unix(sshd:session): session closed for user core May 17 00:35:59.324560 systemd[1]: sshd@20-10.0.0.5:22-10.0.0.1:44730.service: Deactivated successfully. May 17 00:35:59.326307 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:35:59.327842 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. May 17 00:35:59.329326 systemd[1]: Started sshd@21-10.0.0.5:22-10.0.0.1:44732.service - OpenSSH per-connection server daemon (10.0.0.1:44732). May 17 00:35:59.330357 systemd-logind[1447]: Removed session 21. May 17 00:35:59.379148 sshd[6044]: Accepted publickey for core from 10.0.0.1 port 44732 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc May 17 00:35:59.380903 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:35:59.385154 systemd-logind[1447]: New session 22 of user core. May 17 00:35:59.392761 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 00:35:59.532713 sshd[6044]: pam_unix(sshd:session): session closed for user core May 17 00:35:59.536821 systemd[1]: sshd@21-10.0.0.5:22-10.0.0.1:44732.service: Deactivated successfully. May 17 00:35:59.539838 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:35:59.540600 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. May 17 00:35:59.541442 systemd-logind[1447]: Removed session 22. 
May 17 00:36:02.938988 kubelet[2510]: E0517 00:36:02.938787 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558"
May 17 00:36:04.548695 systemd[1]: Started sshd@22-10.0.0.5:22-10.0.0.1:54928.service - OpenSSH per-connection server daemon (10.0.0.1:54928).
May 17 00:36:04.589560 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 54928 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc
May 17 00:36:04.591359 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:04.596096 systemd-logind[1447]: New session 23 of user core.
May 17 00:36:04.606806 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:36:04.736997 sshd[6058]: pam_unix(sshd:session): session closed for user core
May 17 00:36:04.742752 systemd[1]: sshd@22-10.0.0.5:22-10.0.0.1:54928.service: Deactivated successfully.
May 17 00:36:04.745051 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:36:04.745948 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
May 17 00:36:04.747091 systemd-logind[1447]: Removed session 23.
May 17 00:36:05.938826 kubelet[2510]: E0517 00:36:05.938701 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-64b597d656-vs877" podUID="abd949cd-2e01-4075-875a-35887707269d"
May 17 00:36:09.759043 systemd[1]: Started sshd@23-10.0.0.5:22-10.0.0.1:54938.service - OpenSSH per-connection server daemon (10.0.0.1:54938).
May 17 00:36:09.802960 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 54938 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc
May 17 00:36:09.805558 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:09.812360 systemd-logind[1447]: New session 24 of user core.
May 17 00:36:09.817278 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:36:10.017400 sshd[6095]: pam_unix(sshd:session): session closed for user core
May 17 00:36:10.021981 systemd[1]: sshd@23-10.0.0.5:22-10.0.0.1:54938.service: Deactivated successfully.
May 17 00:36:10.024375 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:36:10.026591 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
May 17 00:36:10.027711 systemd-logind[1447]: Removed session 24.
May 17 00:36:14.938002 kubelet[2510]: E0517 00:36:14.937711 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-vss2s" podUID="1cf92987-bd0b-472f-a9b0-2d45c7497558"
May 17 00:36:15.032578 systemd[1]: Started sshd@24-10.0.0.5:22-10.0.0.1:53738.service - OpenSSH per-connection server daemon (10.0.0.1:53738).
May 17 00:36:15.083986 sshd[6111]: Accepted publickey for core from 10.0.0.1 port 53738 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc
May 17 00:36:15.086156 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:15.100512 systemd-logind[1447]: New session 25 of user core.
May 17 00:36:15.103994 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 00:36:15.283329 sshd[6111]: pam_unix(sshd:session): session closed for user core
May 17 00:36:15.287467 systemd[1]: sshd@24-10.0.0.5:22-10.0.0.1:53738.service: Deactivated successfully.
May 17 00:36:15.289512 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:36:15.291617 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
May 17 00:36:15.292768 systemd-logind[1447]: Removed session 25.
May 17 00:36:15.937457 kubelet[2510]: E0517 00:36:15.937410 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:36:17.939222 kubelet[2510]: E0517 00:36:17.939166 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-64b597d656-vs877" podUID="abd949cd-2e01-4075-875a-35887707269d"
May 17 00:36:20.297320 systemd[1]: Started sshd@25-10.0.0.5:22-10.0.0.1:53752.service - OpenSSH per-connection server daemon (10.0.0.1:53752).
May 17 00:36:20.354793 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 53752 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc
May 17 00:36:20.356860 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:20.361474 systemd-logind[1447]: New session 26 of user core.
May 17 00:36:20.373877 systemd[1]: Started session-26.scope - Session 26 of User core.
May 17 00:36:20.500419 sshd[6144]: pam_unix(sshd:session): session closed for user core
May 17 00:36:20.503942 systemd[1]: sshd@25-10.0.0.5:22-10.0.0.1:53752.service: Deactivated successfully.
May 17 00:36:20.506245 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:36:20.508314 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit.
May 17 00:36:20.509432 systemd-logind[1447]: Removed session 26.
May 17 00:36:25.515650 systemd[1]: Started sshd@26-10.0.0.5:22-10.0.0.1:43282.service - OpenSSH per-connection server daemon (10.0.0.1:43282).
May 17 00:36:25.556557 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 43282 ssh2: RSA SHA256:q3rGW/yc1xqbcktdrAruCxPdIePdY4QS4w60a1ZXxbc
May 17 00:36:25.558451 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:25.562943 systemd-logind[1447]: New session 27 of user core.
May 17 00:36:25.573793 systemd[1]: Started session-27.scope - Session 27 of User core.
May 17 00:36:25.760114 sshd[6160]: pam_unix(sshd:session): session closed for user core
May 17 00:36:25.765463 systemd[1]: sshd@26-10.0.0.5:22-10.0.0.1:43282.service: Deactivated successfully.
May 17 00:36:25.768176 systemd[1]: session-27.scope: Deactivated successfully.
May 17 00:36:25.769476 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit.
May 17 00:36:25.770585 systemd-logind[1447]: Removed session 27.