May 17 00:16:18.876091 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:16:18.876118 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:16:18.876130 kernel: BIOS-provided physical RAM map:
May 17 00:16:18.876138 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:16:18.876146 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:16:18.876155 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:16:18.876164 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 17 00:16:18.876173 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 17 00:16:18.876181 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:16:18.876192 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:16:18.876201 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:16:18.876208 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:16:18.876214 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:16:18.876221 kernel: NX (Execute Disable) protection: active
May 17 00:16:18.876228 kernel: APIC: Static calls initialized
May 17 00:16:18.876239 kernel: SMBIOS 2.8 present.
May 17 00:16:18.876249 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 17 00:16:18.876257 kernel: Hypervisor detected: KVM
May 17 00:16:18.876263 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:16:18.876271 kernel: kvm-clock: using sched offset of 2202436701 cycles
May 17 00:16:18.876281 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:16:18.876290 kernel: tsc: Detected 2794.748 MHz processor
May 17 00:16:18.876299 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:16:18.876309 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:16:18.876318 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 17 00:16:18.876331 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:16:18.876338 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:16:18.876344 kernel: Using GB pages for direct mapping
May 17 00:16:18.876351 kernel: ACPI: Early table checksum verification disabled
May 17 00:16:18.876358 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 17 00:16:18.876365 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:16:18.876372 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:16:18.876379 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:16:18.876388 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 17 00:16:18.876395 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:16:18.876402 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:16:18.876408 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:16:18.876415 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:16:18.876424 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 17 00:16:18.876433 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 17 00:16:18.876443 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 17 00:16:18.876452 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 17 00:16:18.876459 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 17 00:16:18.876467 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 17 00:16:18.876474 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 17 00:16:18.876481 kernel: No NUMA configuration found
May 17 00:16:18.876522 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 17 00:16:18.876531 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 17 00:16:18.876543 kernel: Zone ranges:
May 17 00:16:18.876550 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:16:18.876557 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 17 00:16:18.876564 kernel: Normal empty
May 17 00:16:18.876571 kernel: Movable zone start for each node
May 17 00:16:18.876578 kernel: Early memory node ranges
May 17 00:16:18.876586 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:16:18.876603 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 17 00:16:18.876613 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 17 00:16:18.876622 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:16:18.876629 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:16:18.876636 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 17 00:16:18.876644 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:16:18.876651 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:16:18.876658 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:16:18.876665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:16:18.876672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:16:18.876679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:16:18.876689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:16:18.876696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:16:18.876703 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:16:18.876710 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:16:18.876719 kernel: TSC deadline timer available
May 17 00:16:18.876728 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 17 00:16:18.876735 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:16:18.876742 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:16:18.876749 kernel: kvm-guest: setup PV sched yield
May 17 00:16:18.876757 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:16:18.876766 kernel: Booting paravirtualized kernel on KVM
May 17 00:16:18.876773 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:16:18.876781 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 17 00:16:18.876788 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 17 00:16:18.876795 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 17 00:16:18.876802 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 00:16:18.876811 kernel: kvm-guest: PV spinlocks enabled
May 17 00:16:18.876820 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:16:18.876828 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:16:18.876839 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:16:18.876849 kernel: random: crng init done
May 17 00:16:18.876856 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:16:18.876863 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:16:18.876870 kernel: Fallback order for Node 0: 0
May 17 00:16:18.876878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 17 00:16:18.876885 kernel: Policy zone: DMA32
May 17 00:16:18.876892 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:16:18.876901 kernel: Memory: 2434596K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 136896K reserved, 0K cma-reserved)
May 17 00:16:18.876909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 00:16:18.876916 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:16:18.876923 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:16:18.876930 kernel: Dynamic Preempt: voluntary
May 17 00:16:18.876937 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:16:18.876945 kernel: rcu: RCU event tracing is enabled.
May 17 00:16:18.876953 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 00:16:18.876963 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:16:18.876973 kernel: Rude variant of Tasks RCU enabled.
May 17 00:16:18.876980 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:16:18.876987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:16:18.876994 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 00:16:18.877001 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 00:16:18.877008 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:16:18.877015 kernel: Console: colour VGA+ 80x25
May 17 00:16:18.877023 kernel: printk: console [ttyS0] enabled
May 17 00:16:18.877030 kernel: ACPI: Core revision 20230628
May 17 00:16:18.877041 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:16:18.877051 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:16:18.877062 kernel: x2apic enabled
May 17 00:16:18.877070 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:16:18.877077 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 17 00:16:18.877084 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 17 00:16:18.877091 kernel: kvm-guest: setup PV IPIs
May 17 00:16:18.877110 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:16:18.877119 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:16:18.877126 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 17 00:16:18.877134 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:16:18.877141 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:16:18.877151 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:16:18.877158 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:16:18.877166 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:16:18.877174 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:16:18.877181 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:16:18.877191 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:16:18.877198 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:16:18.877206 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:16:18.877217 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 17 00:16:18.877225 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 17 00:16:18.877233 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 17 00:16:18.877240 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:16:18.877248 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:16:18.877258 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:16:18.877265 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:16:18.877273 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 17 00:16:18.877280 kernel: Freeing SMP alternatives memory: 32K
May 17 00:16:18.877287 kernel: pid_max: default: 32768 minimum: 301
May 17 00:16:18.877295 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:16:18.877302 kernel: landlock: Up and running.
May 17 00:16:18.877309 kernel: SELinux: Initializing.
May 17 00:16:18.877317 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:16:18.877326 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:16:18.877334 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:16:18.877341 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:16:18.877349 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:16:18.877357 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:16:18.877364 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:16:18.877371 kernel: ... version:                0
May 17 00:16:18.877379 kernel: ... bit width:              48
May 17 00:16:18.877386 kernel: ... generic registers:      6
May 17 00:16:18.877396 kernel: ... value mask:             0000ffffffffffff
May 17 00:16:18.877403 kernel: ... max period:             00007fffffffffff
May 17 00:16:18.877410 kernel: ... fixed-purpose events:   0
May 17 00:16:18.877418 kernel: ... event mask:             000000000000003f
May 17 00:16:18.877425 kernel: signal: max sigframe size: 1776
May 17 00:16:18.877432 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:16:18.877440 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:16:18.877447 kernel: smp: Bringing up secondary CPUs ...
May 17 00:16:18.877455 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:16:18.877464 kernel: .... node #0, CPUs: #1 #2 #3
May 17 00:16:18.877472 kernel: smp: Brought up 1 node, 4 CPUs
May 17 00:16:18.877479 kernel: smpboot: Max logical packages: 1
May 17 00:16:18.877498 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 17 00:16:18.877506 kernel: devtmpfs: initialized
May 17 00:16:18.877513 kernel: x86/mm: Memory block size: 128MB
May 17 00:16:18.877520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:16:18.877528 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 00:16:18.877535 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:16:18.877545 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:16:18.877553 kernel: audit: initializing netlink subsys (disabled)
May 17 00:16:18.877560 kernel: audit: type=2000 audit(1747440979.130:1): state=initialized audit_enabled=0 res=1
May 17 00:16:18.877568 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:16:18.877585 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:16:18.877598 kernel: cpuidle: using governor menu
May 17 00:16:18.877607 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:16:18.877614 kernel: dca service started, version 1.12.1
May 17 00:16:18.877629 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:16:18.877640 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 17 00:16:18.877662 kernel: PCI: Using configuration type 1 for base access
May 17 00:16:18.877678 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:16:18.877693 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:16:18.877709 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:16:18.877730 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:16:18.877746 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:16:18.877754 kernel: ACPI: Added _OSI(Module Device)
May 17 00:16:18.877775 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:16:18.877787 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:16:18.877794 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:16:18.877802 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:16:18.877809 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:16:18.877817 kernel: ACPI: Interpreter enabled
May 17 00:16:18.877824 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:16:18.877831 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:16:18.877839 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:16:18.877846 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:16:18.877856 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:16:18.877863 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:16:18.878047 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:16:18.878191 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:16:18.878317 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:16:18.878327 kernel: PCI host bridge to bus 0000:00
May 17 00:16:18.878449 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:16:18.878585 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:16:18.878706 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:16:18.878814 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 17 00:16:18.878934 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:16:18.879124 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 17 00:16:18.879258 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:16:18.879439 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:16:18.879603 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:16:18.879726 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 17 00:16:18.879846 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 17 00:16:18.879963 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 17 00:16:18.880113 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:16:18.880249 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:16:18.880379 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 17 00:16:18.880563 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 17 00:16:18.880711 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 17 00:16:18.880854 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 17 00:16:18.880990 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:16:18.881117 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:16:18.881243 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 17 00:16:18.881412 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:16:18.881578 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 17 00:16:18.881712 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 17 00:16:18.881832 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 17 00:16:18.881952 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 17 00:16:18.882086 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:16:18.882205 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:16:18.882337 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:16:18.882457 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 17 00:16:18.882609 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 17 00:16:18.882758 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:16:18.882878 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:16:18.882889 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:16:18.882897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:16:18.882908 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:16:18.882916 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:16:18.882923 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:16:18.882931 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:16:18.882939 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:16:18.882946 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:16:18.882954 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:16:18.882961 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:16:18.882969 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:16:18.882979 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:16:18.882986 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:16:18.882994 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:16:18.883001 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:16:18.883009 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:16:18.883016 kernel: iommu: Default domain type: Translated
May 17 00:16:18.883024 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:16:18.883031 kernel: PCI: Using ACPI for IRQ routing
May 17 00:16:18.883039 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:16:18.883048 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:16:18.883056 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 17 00:16:18.883175 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:16:18.883297 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:16:18.883416 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:16:18.883426 kernel: vgaarb: loaded
May 17 00:16:18.883434 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:16:18.883442 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:16:18.883452 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:16:18.883460 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:16:18.883468 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:16:18.883475 kernel: pnp: PnP ACPI init
May 17 00:16:18.883628 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:16:18.883640 kernel: pnp: PnP ACPI: found 6 devices
May 17 00:16:18.883648 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:16:18.883655 kernel: NET: Registered PF_INET protocol family
May 17 00:16:18.883666 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:16:18.883674 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:16:18.883682 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:16:18.883689 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:16:18.883697 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:16:18.883704 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:16:18.883712 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:16:18.883720 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:16:18.883727 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:16:18.883737 kernel: NET: Registered PF_XDP protocol family
May 17 00:16:18.883848 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:16:18.883972 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:16:18.884087 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:16:18.884209 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 17 00:16:18.884324 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:16:18.884438 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 17 00:16:18.884451 kernel: PCI: CLS 0 bytes, default 64
May 17 00:16:18.884462 kernel: Initialise system trusted keyrings
May 17 00:16:18.884470 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:16:18.884478 kernel: Key type asymmetric registered
May 17 00:16:18.884501 kernel: Asymmetric key parser 'x509' registered
May 17 00:16:18.884509 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:16:18.884520 kernel: io scheduler mq-deadline registered
May 17 00:16:18.884528 kernel: io scheduler kyber registered
May 17 00:16:18.884536 kernel: io scheduler bfq registered
May 17 00:16:18.884543 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:16:18.884551 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:16:18.884562 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:16:18.884570 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 00:16:18.884577 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:16:18.884585 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:16:18.884603 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:16:18.884612 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:16:18.884620 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:16:18.884751 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 00:16:18.884770 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:16:18.884887 kernel: rtc_cmos 00:04: registered as rtc0
May 17 00:16:18.885005 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:16:18 UTC (1747440978)
May 17 00:16:18.885125 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:16:18.885136 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 00:16:18.885143 kernel: NET: Registered PF_INET6 protocol family
May 17 00:16:18.885151 kernel: Segment Routing with IPv6
May 17 00:16:18.885158 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:16:18.885170 kernel: NET: Registered PF_PACKET protocol family
May 17 00:16:18.885178 kernel: Key type dns_resolver registered
May 17 00:16:18.885188 kernel: IPI shorthand broadcast: enabled
May 17 00:16:18.885197 kernel: sched_clock: Marking stable (552002230, 104432901)->(704950636, -48515505)
May 17 00:16:18.885205 kernel: registered taskstats version 1
May 17 00:16:18.885212 kernel: Loading compiled-in X.509 certificates
May 17 00:16:18.885220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:16:18.885227 kernel: Key type .fscrypt registered
May 17 00:16:18.885235 kernel: Key type fscrypt-provisioning registered
May 17 00:16:18.885246 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:16:18.885256 kernel: ima: Allocated hash algorithm: sha1
May 17 00:16:18.885263 kernel: ima: No architecture policies found
May 17 00:16:18.885271 kernel: clk: Disabling unused clocks
May 17 00:16:18.885278 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:16:18.885286 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:16:18.885293 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:16:18.885301 kernel: Run /init as init process
May 17 00:16:18.885308 kernel:   with arguments:
May 17 00:16:18.885318 kernel:     /init
May 17 00:16:18.885327 kernel:   with environment:
May 17 00:16:18.885336 kernel:     HOME=/
May 17 00:16:18.885344 kernel:     TERM=linux
May 17 00:16:18.885351 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:16:18.885360 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:16:18.885370 systemd[1]: Detected virtualization kvm.
May 17 00:16:18.885378 systemd[1]: Detected architecture x86-64.
May 17 00:16:18.885388 systemd[1]: Running in initrd.
May 17 00:16:18.885396 systemd[1]: No hostname configured, using default hostname.
May 17 00:16:18.885404 systemd[1]: Hostname set to .
May 17 00:16:18.885412 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:16:18.885421 systemd[1]: Queued start job for default target initrd.target.
May 17 00:16:18.885432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:16:18.885441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:16:18.885449 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:16:18.885460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:16:18.885480 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:16:18.885512 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:16:18.885525 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:16:18.885534 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:16:18.885545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:16:18.885553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:16:18.885561 systemd[1]: Reached target paths.target - Path Units.
May 17 00:16:18.885570 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:16:18.885579 systemd[1]: Reached target swap.target - Swaps.
May 17 00:16:18.885597 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:16:18.885605 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:16:18.885615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:16:18.885626 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:16:18.885634 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:16:18.885642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:16:18.885650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:16:18.885659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:16:18.885667 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:16:18.885678 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:16:18.885688 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:16:18.885696 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:16:18.885706 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:16:18.885716 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:16:18.885727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:16:18.885735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:16:18.885744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:16:18.885752 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:16:18.885760 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:16:18.885789 systemd-journald[192]: Collecting audit messages is disabled.
May 17 00:16:18.885811 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:16:18.885822 systemd-journald[192]: Journal started
May 17 00:16:18.885843 systemd-journald[192]: Runtime Journal (/run/log/journal/5f22147f71b34f0c8b56b174584e68cf) is 6.0M, max 48.4M, 42.3M free.
May 17 00:16:18.879405 systemd-modules-load[193]: Inserted module 'overlay'
May 17 00:16:18.889394 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:16:18.890113 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:16:18.891989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:16:18.901718 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:16:18.935047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:16:18.935077 kernel: Bridge firewalling registered
May 17 00:16:18.908550 systemd-modules-load[193]: Inserted module 'br_netfilter'
May 17 00:16:18.949704 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:16:18.951231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:16:18.953285 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:16:18.963712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:16:18.964747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:16:18.965428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:16:18.977637 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:16:18.986681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:16:18.990338 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:16:18.991618 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:16:19.007535 dracut-cmdline[230]: dracut-dracut-053
May 17 00:16:19.010429 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:16:19.015703 systemd-resolved[222]: Positive Trust Anchors:
May 17 00:16:19.015720 systemd-resolved[222]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:16:19.015752 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:16:19.018238 systemd-resolved[222]: Defaulting to hostname 'linux'. May 17 00:16:19.019289 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:16:19.025340 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:16:19.100521 kernel: SCSI subsystem initialized May 17 00:16:19.109515 kernel: Loading iSCSI transport class v2.0-870. May 17 00:16:19.119512 kernel: iscsi: registered transport (tcp) May 17 00:16:19.140527 kernel: iscsi: registered transport (qla4xxx) May 17 00:16:19.140608 kernel: QLogic iSCSI HBA Driver May 17 00:16:19.192613 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:16:19.201710 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:16:19.226338 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:16:19.226408 kernel: device-mapper: uevent: version 1.0.3 May 17 00:16:19.226420 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:16:19.268510 kernel: raid6: avx2x4 gen() 30174 MB/s May 17 00:16:19.285510 kernel: raid6: avx2x2 gen() 29961 MB/s May 17 00:16:19.302592 kernel: raid6: avx2x1 gen() 25992 MB/s May 17 00:16:19.302611 kernel: raid6: using algorithm avx2x4 gen() 30174 MB/s May 17 00:16:19.320590 kernel: raid6: .... xor() 7956 MB/s, rmw enabled May 17 00:16:19.320610 kernel: raid6: using avx2x2 recovery algorithm May 17 00:16:19.341515 kernel: xor: automatically using best checksumming function avx May 17 00:16:19.496516 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:16:19.510524 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:16:19.523635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:16:19.537499 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 17 00:16:19.541966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:16:19.554663 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:16:19.569786 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation May 17 00:16:19.601980 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:16:19.613632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:16:19.674726 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:16:19.687627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:16:19.700256 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:16:19.702733 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 17 00:16:19.706557 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:16:19.707841 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:16:19.713545 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 17 00:16:19.716592 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 00:16:19.718690 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:16:19.726902 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:16:19.726945 kernel: GPT:9289727 != 19775487 May 17 00:16:19.726963 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:16:19.726973 kernel: GPT:9289727 != 19775487 May 17 00:16:19.726982 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:16:19.726992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:16:19.729526 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:16:19.730504 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:16:19.736578 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:16:19.736694 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:16:19.743027 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:16:19.745495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:16:19.745635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:19.746921 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:16:19.754506 kernel: libata version 3.00 loaded. May 17 00:16:19.754537 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:16:19.754549 kernel: AES CTR mode by8 optimization enabled May 17 00:16:19.760643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 17 00:16:19.764844 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) May 17 00:16:19.769531 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (465) May 17 00:16:19.770833 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:16:19.771023 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:16:19.773137 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:16:19.773301 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:16:19.782513 kernel: scsi host0: ahci May 17 00:16:19.783506 kernel: scsi host1: ahci May 17 00:16:19.783715 kernel: scsi host2: ahci May 17 00:16:19.784505 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 17 00:16:19.818886 kernel: scsi host3: ahci May 17 00:16:19.819146 kernel: scsi host4: ahci May 17 00:16:19.819294 kernel: scsi host5: ahci May 17 00:16:19.819439 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 17 00:16:19.819451 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 17 00:16:19.819461 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 17 00:16:19.819475 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 17 00:16:19.819485 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 17 00:16:19.819514 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 17 00:16:19.820311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:19.828183 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 17 00:16:19.838760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 17 00:16:19.844802 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 17 00:16:19.848073 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 17 00:16:19.863605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:16:19.866573 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:16:19.884532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:16:19.903043 disk-uuid[553]: Primary Header is updated. May 17 00:16:19.903043 disk-uuid[553]: Secondary Entries is updated. May 17 00:16:19.903043 disk-uuid[553]: Secondary Header is updated. May 17 00:16:19.907512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:16:19.911515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:16:20.096325 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:16:20.096374 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:16:20.096393 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:16:20.096508 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:16:20.097512 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:16:20.098514 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:16:20.099520 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:16:20.099536 kernel: ata3.00: applying bridge limits May 17 00:16:20.100535 kernel: ata3.00: configured for UDMA/100 May 17 00:16:20.101519 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:16:20.146505 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:16:20.146724 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:16:20.160524 kernel: sr 2:0:0:0: Attached scsi 
CD-ROM sr0 May 17 00:16:20.912526 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:16:20.913008 disk-uuid[563]: The operation has completed successfully. May 17 00:16:20.943067 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:16:20.943188 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:16:20.964634 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:16:20.968228 sh[590]: Success May 17 00:16:20.981525 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:16:21.013143 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:16:21.024071 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:16:21.026857 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:16:21.038605 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:16:21.038634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:16:21.038645 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:16:21.039629 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:16:21.040972 kernel: BTRFS info (device dm-0): using free space tree May 17 00:16:21.044940 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:16:21.045743 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:16:21.053683 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:16:21.055342 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 17 00:16:21.063792 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:16:21.063823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:16:21.063834 kernel: BTRFS info (device vda6): using free space tree May 17 00:16:21.067520 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:16:21.076441 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:16:21.078584 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:16:21.088126 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:16:21.095714 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:16:21.150145 ignition[684]: Ignition 2.19.0 May 17 00:16:21.150854 ignition[684]: Stage: fetch-offline May 17 00:16:21.150916 ignition[684]: no configs at "/usr/lib/ignition/base.d" May 17 00:16:21.150927 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:16:21.151027 ignition[684]: parsed url from cmdline: "" May 17 00:16:21.151031 ignition[684]: no config URL provided May 17 00:16:21.151037 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:16:21.151045 ignition[684]: no config at "/usr/lib/ignition/user.ign" May 17 00:16:21.151072 ignition[684]: op(1): [started] loading QEMU firmware config module May 17 00:16:21.151078 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 00:16:21.160139 ignition[684]: op(1): [finished] loading QEMU firmware config module May 17 00:16:21.173946 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:16:21.193749 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 17 00:16:21.202624 ignition[684]: parsing config with SHA512: 1677ad4024614a1dca7c82d67f03fcef25e63c432518333528990e7687b03a4336388835afd0341738494d09d31f5a849972b91f5566f2719579c01e7eb4531b May 17 00:16:21.206332 unknown[684]: fetched base config from "system" May 17 00:16:21.206347 unknown[684]: fetched user config from "qemu" May 17 00:16:21.207448 ignition[684]: fetch-offline: fetch-offline passed May 17 00:16:21.207562 ignition[684]: Ignition finished successfully May 17 00:16:21.212282 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:16:21.216871 systemd-networkd[779]: lo: Link UP May 17 00:16:21.216882 systemd-networkd[779]: lo: Gained carrier May 17 00:16:21.218396 systemd-networkd[779]: Enumeration completed May 17 00:16:21.218478 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:16:21.218812 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:21.218816 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:16:21.220692 systemd[1]: Reached target network.target - Network. May 17 00:16:21.220899 systemd-networkd[779]: eth0: Link UP May 17 00:16:21.220903 systemd-networkd[779]: eth0: Gained carrier May 17 00:16:21.220910 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:21.222620 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:16:21.233635 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 17 00:16:21.243555 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:16:21.246941 ignition[782]: Ignition 2.19.0 May 17 00:16:21.246952 ignition[782]: Stage: kargs May 17 00:16:21.247107 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 17 00:16:21.247119 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:16:21.247999 ignition[782]: kargs: kargs passed May 17 00:16:21.248037 ignition[782]: Ignition finished successfully May 17 00:16:21.251141 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:16:21.263629 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:16:21.274175 ignition[791]: Ignition 2.19.0 May 17 00:16:21.274185 ignition[791]: Stage: disks May 17 00:16:21.274341 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 17 00:16:21.274351 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:16:21.275252 ignition[791]: disks: disks passed May 17 00:16:21.277468 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:16:21.275295 ignition[791]: Ignition finished successfully May 17 00:16:21.278934 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:16:21.280429 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:16:21.282599 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:16:21.283626 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:16:21.284031 systemd[1]: Reached target basic.target - Basic System. May 17 00:16:21.292621 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:16:21.305169 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:16:21.311586 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
May 17 00:16:21.325574 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:16:21.412506 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:16:21.412508 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:16:21.413937 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:16:21.422576 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:16:21.423872 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:16:21.425411 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:16:21.430608 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) May 17 00:16:21.430629 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:16:21.425448 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:16:21.437082 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:16:21.437097 kernel: BTRFS info (device vda6): using free space tree May 17 00:16:21.437108 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:16:21.425469 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:16:21.432105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:16:21.438130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:16:21.447634 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 17 00:16:21.479198 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:16:21.483733 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory May 17 00:16:21.488390 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:16:21.493319 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:16:21.556947 systemd-resolved[222]: Detected conflict on linux IN A 10.0.0.73 May 17 00:16:21.556961 systemd-resolved[222]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. May 17 00:16:21.579481 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:16:21.589586 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:16:21.591184 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:16:21.600505 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:16:21.615642 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:16:21.638850 ignition[927]: INFO : Ignition 2.19.0 May 17 00:16:21.638850 ignition[927]: INFO : Stage: mount May 17 00:16:21.640726 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:16:21.640726 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:16:21.640726 ignition[927]: INFO : mount: mount passed May 17 00:16:21.640726 ignition[927]: INFO : Ignition finished successfully May 17 00:16:21.646386 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:16:21.652605 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:16:22.038379 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:16:22.051718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 17 00:16:22.058505 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (936) May 17 00:16:22.060813 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:16:22.060834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:16:22.060845 kernel: BTRFS info (device vda6): using free space tree May 17 00:16:22.064509 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:16:22.065731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:16:22.089910 ignition[953]: INFO : Ignition 2.19.0 May 17 00:16:22.089910 ignition[953]: INFO : Stage: files May 17 00:16:22.091815 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:16:22.091815 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:16:22.091815 ignition[953]: DEBUG : files: compiled without relabeling support, skipping May 17 00:16:22.095370 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:16:22.095370 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:16:22.095370 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:16:22.095370 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:16:22.095370 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:16:22.094775 unknown[953]: wrote ssh authorized keys file for user: core May 17 00:16:22.103310 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:16:22.103310 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:16:22.103310 ignition[953]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:16:22.103310 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:16:22.151013 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:16:22.306907 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:16:22.306907 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:16:22.310966 ignition[953]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:16:22.310966 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:16:22.696601 systemd-networkd[779]: eth0: Gained IPv6LL May 17 00:16:23.045322 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:16:23.401424 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:16:23.401424 ignition[953]: INFO : files: op(c): [started] processing unit "containerd.service" May 17 00:16:23.405585 ignition[953]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:16:23.405585 ignition[953]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:16:23.405585 ignition[953]: INFO : files: op(c): [finished] processing unit "containerd.service" May 17 00:16:23.405585 ignition[953]: INFO : files: op(e): [started] processing unit "prepare-helm.service" 
May 17 00:16:23.405585 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:16:23.405585 ignition[953]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:16:23.405585 ignition[953]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 17 00:16:23.405585 ignition[953]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 17 00:16:23.405585 ignition[953]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:16:23.405585 ignition[953]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:16:23.405585 ignition[953]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 17 00:16:23.405585 ignition[953]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
May 17 00:16:23.431907 ignition[953]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:16:23.437145 ignition[953]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:16:23.438757 ignition[953]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 00:16:23.438757 ignition[953]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:16:23.438757 ignition[953]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:16:23.438757 ignition[953]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:16:23.438757 ignition[953]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:16:23.438757 ignition[953]: INFO : files: files passed
May 17 00:16:23.438757 ignition[953]: INFO : Ignition finished successfully
May 17 00:16:23.440156 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:16:23.446765 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:16:23.449568 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:16:23.450922 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:16:23.451035 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:16:23.458853 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
May 17 00:16:23.461678 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:16:23.461678 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:16:23.464966 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:16:23.467951 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:16:23.470624 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:16:23.481697 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:16:23.509171 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:16:23.509312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:16:23.511843 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:16:23.514192 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:16:23.515373 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:16:23.523684 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:16:23.536932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:16:23.539933 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:16:23.552623 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:16:23.553126 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:16:23.553456 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:16:23.601957 ignition[1009]: INFO : Ignition 2.19.0
May 17 00:16:23.601957 ignition[1009]: INFO : Stage: umount
May 17 00:16:23.601957 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:16:23.601957 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:16:23.601957 ignition[1009]: INFO : umount: umount passed
May 17 00:16:23.601957 ignition[1009]: INFO : Ignition finished successfully
May 17 00:16:23.553783 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:16:23.553903 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:16:23.554545 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:16:23.554852 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:16:23.555195 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:16:23.555546 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:16:23.555868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:16:23.556203 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:16:23.556376 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:16:23.556959 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:16:23.557316 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:16:23.557820 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:16:23.557981 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:16:23.558087 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:16:23.558760 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:16:23.559150 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:16:23.559503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:16:23.559608 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:16:23.559879 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:16:23.559983 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:16:23.560426 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:16:23.560556 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:16:23.561019 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:16:23.561280 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:16:23.566542 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:16:23.566925 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:16:23.567238 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:16:23.567747 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:16:23.567838 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:16:23.568271 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:16:23.568357 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:16:23.568788 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:16:23.568902 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:16:23.569298 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:16:23.569398 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:16:23.570519 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:16:23.570841 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:16:23.570952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:16:23.571928 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:16:23.572221 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:16:23.572321 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:16:23.572714 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:16:23.572809 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:16:23.575877 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:16:23.575999 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:16:23.587988 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:16:23.588128 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:16:23.588443 systemd[1]: Stopped target network.target - Network.
May 17 00:16:23.588586 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:16:23.588638 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:16:23.588980 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:16:23.589021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:16:23.589351 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:16:23.589392 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:16:23.589756 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:16:23.589803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:16:23.590259 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:16:23.590605 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:16:23.598093 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:16:23.598224 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:16:23.602073 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:16:23.602739 systemd-networkd[779]: eth0: DHCPv6 lease lost
May 17 00:16:23.602874 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:16:23.602932 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:16:23.606186 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:16:23.606345 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:16:23.609226 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:16:23.609270 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:16:23.618639 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:16:23.619980 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:16:23.620045 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:16:23.622810 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:16:23.622860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:16:23.625522 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:16:23.625572 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:16:23.627002 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:16:23.638019 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:16:23.638145 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:16:23.648267 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:16:23.648448 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:16:23.651105 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:16:23.651156 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:16:23.652721 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:16:23.652762 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:16:23.654577 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:16:23.654627 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:16:23.656673 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:16:23.656721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:16:23.658475 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:16:23.658538 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:16:23.669709 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:16:23.671523 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:16:23.671589 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:16:23.673827 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:16:23.673878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:16:23.676847 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:16:23.676966 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:16:23.785152 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:16:23.785286 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:16:23.787730 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:16:23.789800 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:16:23.789854 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:16:23.810735 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:16:23.816800 systemd[1]: Switching root.
May 17 00:16:23.846689 systemd-journald[192]: Journal stopped
May 17 00:16:25.053031 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 17 00:16:25.053096 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:16:25.053118 kernel: SELinux: policy capability open_perms=1
May 17 00:16:25.053130 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:16:25.053146 kernel: SELinux: policy capability always_check_network=0
May 17 00:16:25.053163 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:16:25.053179 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:16:25.053193 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:16:25.053208 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:16:25.053219 kernel: audit: type=1403 audit(1747440984.340:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:16:25.053232 systemd[1]: Successfully loaded SELinux policy in 45.883ms.
May 17 00:16:25.053253 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.653ms.
May 17 00:16:25.053265 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:16:25.053277 systemd[1]: Detected virtualization kvm.
May 17 00:16:25.053289 systemd[1]: Detected architecture x86-64.
May 17 00:16:25.053303 systemd[1]: Detected first boot.
May 17 00:16:25.053315 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:16:25.053326 zram_generator::config[1071]: No configuration found.
May 17 00:16:25.053340 systemd[1]: Populated /etc with preset unit settings.
May 17 00:16:25.053351 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:16:25.053363 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 00:16:25.053376 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:16:25.053389 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:16:25.053401 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:16:25.053416 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:16:25.053428 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:16:25.053440 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:16:25.053452 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:16:25.053472 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:16:25.053507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:16:25.053520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:16:25.053532 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:16:25.053548 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:16:25.053560 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:16:25.053572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:16:25.053584 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:16:25.053595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:16:25.053607 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:16:25.053619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:16:25.053631 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:16:25.053643 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:16:25.053657 systemd[1]: Reached target swap.target - Swaps.
May 17 00:16:25.053669 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:16:25.053682 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:16:25.053694 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:16:25.053706 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:16:25.053722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:16:25.053734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:16:25.053746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:16:25.053758 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:16:25.053773 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:16:25.053784 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:16:25.053796 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:16:25.053808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:25.053820 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:16:25.053832 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:16:25.053844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:16:25.053856 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:16:25.053870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:16:25.053882 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:16:25.053894 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:16:25.053905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:16:25.053917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:16:25.053929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:16:25.053942 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:16:25.053954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:16:25.053967 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:16:25.053981 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 00:16:25.053994 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 17 00:16:25.054006 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:16:25.054018 kernel: fuse: init (API version 7.39)
May 17 00:16:25.054029 kernel: loop: module loaded
May 17 00:16:25.054040 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:16:25.054052 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:16:25.054064 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:16:25.054078 kernel: ACPI: bus type drm_connector registered
May 17 00:16:25.054089 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:16:25.054102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:25.054132 systemd-journald[1156]: Collecting audit messages is disabled.
May 17 00:16:25.054153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:16:25.054165 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:16:25.054176 systemd-journald[1156]: Journal started
May 17 00:16:25.054200 systemd-journald[1156]: Runtime Journal (/run/log/journal/5f22147f71b34f0c8b56b174584e68cf) is 6.0M, max 48.4M, 42.3M free.
May 17 00:16:25.057510 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:16:25.058087 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:16:25.059274 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:16:25.060480 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:16:25.061725 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:16:25.063077 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:16:25.064701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:16:25.066304 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:16:25.066540 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:16:25.068165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:16:25.068373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:16:25.069888 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:16:25.070102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:16:25.071621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:16:25.071832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:16:25.073357 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:16:25.073581 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:16:25.075084 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:16:25.075312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:16:25.076793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:16:25.078720 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:16:25.080851 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:16:25.095582 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:16:25.103552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:16:25.105792 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:16:25.106944 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:16:25.108766 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:16:25.113890 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:16:25.115266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:16:25.120802 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:16:25.122692 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:16:25.125675 systemd-journald[1156]: Time spent on flushing to /var/log/journal/5f22147f71b34f0c8b56b174584e68cf is 19.596ms for 938 entries.
May 17 00:16:25.125675 systemd-journald[1156]: System Journal (/var/log/journal/5f22147f71b34f0c8b56b174584e68cf) is 8.0M, max 195.6M, 187.6M free.
May 17 00:16:25.192881 systemd-journald[1156]: Received client request to flush runtime journal.
May 17 00:16:25.126240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:16:25.131227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:16:25.135671 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:16:25.137023 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:16:25.144020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:16:25.146856 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:16:25.160710 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:16:25.176217 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:16:25.177837 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:16:25.180864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:16:25.183274 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
May 17 00:16:25.183288 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
May 17 00:16:25.189442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:16:25.202990 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:16:25.204678 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:16:25.229426 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:16:25.237646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:16:25.253017 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
May 17 00:16:25.253036 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
May 17 00:16:25.258554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:16:25.689750 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:16:25.699795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:16:25.728668 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
May 17 00:16:25.744190 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:16:25.757635 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:16:25.770633 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:16:25.780848 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
May 17 00:16:25.800173 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1238)
May 17 00:16:25.824771 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:16:25.851527 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 17 00:16:25.856842 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 17 00:16:25.857117 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 17 00:16:25.857290 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 17 00:16:25.857423 kernel: ACPI: button: Power Button [PWRF]
May 17 00:16:25.855707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:16:25.873797 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 17 00:16:25.887580 systemd-networkd[1242]: lo: Link UP
May 17 00:16:25.887592 systemd-networkd[1242]: lo: Gained carrier
May 17 00:16:25.889234 systemd-networkd[1242]: Enumeration completed
May 17 00:16:25.889375 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:16:25.889687 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:16:25.889692 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:16:25.891068 systemd-networkd[1242]: eth0: Link UP
May 17 00:16:25.891072 systemd-networkd[1242]: eth0: Gained carrier
May 17 00:16:25.891084 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:16:25.897518 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:16:25.899620 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:16:25.904529 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:16:25.921621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:16:25.997527 kernel: kvm_amd: TSC scaling supported
May 17 00:16:25.997578 kernel: kvm_amd: Nested Virtualization enabled
May 17 00:16:25.997591 kernel: kvm_amd: Nested Paging enabled
May 17 00:16:25.997603 kernel: kvm_amd: LBR virtualization supported
May 17 00:16:25.997615 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 17 00:16:25.997627 kernel: kvm_amd: Virtual GIF supported
May 17 00:16:26.011348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:16:26.019518 kernel: EDAC MC: Ver: 3.0.0
May 17 00:16:26.049364 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:16:26.057761 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:16:26.066452 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:16:26.092832 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:16:26.094348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:16:26.108600 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:16:26.114084 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:16:26.145968 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:16:26.147539 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:16:26.148836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:16:26.148862 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:16:26.149921 systemd[1]: Reached target machines.target - Containers.
May 17 00:16:26.151978 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:16:26.163695 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:16:26.166075 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:16:26.167232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:16:26.168136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:16:26.170468 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:16:26.173112 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:16:26.175896 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:16:26.187536 kernel: loop0: detected capacity change from 0 to 221472
May 17 00:16:26.191730 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:16:26.201269 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:16:26.202071 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:16:26.207516 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:16:26.233516 kernel: loop1: detected capacity change from 0 to 142488
May 17 00:16:26.265523 kernel: loop2: detected capacity change from 0 to 140768
May 17 00:16:26.303572 kernel: loop3: detected capacity change from 0 to 221472
May 17 00:16:26.311602 kernel: loop4: detected capacity change from 0 to 142488
May 17 00:16:26.321511 kernel: loop5: detected capacity change from 0 to 140768
May 17 00:16:26.329671 (sd-merge)[1306]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 17 00:16:26.330270 (sd-merge)[1306]: Merged extensions into '/usr'.
May 17 00:16:26.334076 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:16:26.334244 systemd[1]: Reloading...
May 17 00:16:26.382661 zram_generator::config[1334]: No configuration found.
May 17 00:16:26.411695 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:16:26.515475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:16:26.579840 systemd[1]: Reloading finished in 245 ms.
May 17 00:16:26.599660 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:16:26.601268 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:16:26.618617 systemd[1]: Starting ensure-sysext.service...
May 17 00:16:26.620664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:16:26.624947 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
May 17 00:16:26.624962 systemd[1]: Reloading...
May 17 00:16:26.644089 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:16:26.644590 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:16:26.645635 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:16:26.645950 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
May 17 00:16:26.646043 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
May 17 00:16:26.653000 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:16:26.653011 systemd-tmpfiles[1379]: Skipping /boot
May 17 00:16:26.670349 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:16:26.670361 systemd-tmpfiles[1379]: Skipping /boot
May 17 00:16:26.678509 zram_generator::config[1410]: No configuration found.
May 17 00:16:26.791473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:16:26.858980 systemd[1]: Reloading finished in 233 ms.
May 17 00:16:26.880402 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:16:26.898693 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:16:26.901193 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:16:26.903578 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:16:26.908984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:16:26.913236 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:16:26.920884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:26.921066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:16:26.922593 systemd-networkd[1242]: eth0: Gained IPv6LL
May 17 00:16:26.922912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:16:26.925384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:16:26.928776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:16:26.931820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:16:26.932049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:26.934245 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 17 00:16:26.942867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:16:26.943088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:16:26.945474 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:16:26.947248 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:16:26.947462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:16:26.953254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:16:26.953615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:16:26.959344 augenrules[1481]: No rules
May 17 00:16:26.961855 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:16:26.966789 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:16:26.970835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:26.971083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:16:26.979776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:16:26.982368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:16:26.987774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:16:26.989089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:16:26.991809 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:16:26.993933 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:26.995140 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:16:26.998794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:16:26.999051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:16:27.000879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:16:27.001115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:16:27.002862 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:16:27.003081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:16:27.006743 systemd-resolved[1456]: Positive Trust Anchors:
May 17 00:16:27.006761 systemd-resolved[1456]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:16:27.006792 systemd-resolved[1456]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:16:27.011079 systemd-resolved[1456]: Defaulting to hostname 'linux'.
May 17 00:16:27.013076 systemd[1]: Finished ensure-sysext.service.
May 17 00:16:27.014224 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:16:27.015823 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:16:27.019117 systemd[1]: Reached target network.target - Network.
May 17 00:16:27.020074 systemd[1]: Reached target network-online.target - Network is Online.
May 17 00:16:27.021185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:16:27.022437 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:27.022653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:16:27.033700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:16:27.035934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:16:27.037876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:16:27.042605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:16:27.043773 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:16:27.045701 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:16:27.046849 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:16:27.046871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:16:27.047575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:16:27.047784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:16:27.049270 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:16:27.049514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:16:27.051001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:16:27.051206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:16:27.052764 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:16:27.052999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:16:27.057884 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:16:27.057959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:16:27.118878 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:16:27.120665 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:16:27.121921 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:16:27.123365 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:16:28.347454 systemd-resolved[1456]: Clock change detected. Flushing caches.
May 17 00:16:28.347464 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 17 00:16:28.347509 systemd-timesyncd[1519]: Initial clock synchronization to Sat 2025-05-17 00:16:28.347366 UTC.
May 17 00:16:28.347680 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:16:28.349101 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:16:28.349148 systemd[1]: Reached target paths.target - Path Units.
May 17 00:16:28.350147 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:16:28.351551 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:16:28.352816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:16:28.354160 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:16:28.355889 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:16:28.359346 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:16:28.361645 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:16:28.366397 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:16:28.367502 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:16:28.368537 systemd[1]: Reached target basic.target - Basic System.
May 17 00:16:28.369668 systemd[1]: System is tainted: cgroupsv1
May 17 00:16:28.369708 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:16:28.369733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:16:28.370886 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:16:28.373400 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 17 00:16:28.376028 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:16:28.379386 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:16:28.384406 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:16:28.385704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:16:28.389016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:16:28.392372 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:16:28.395234 jq[1534]: false
May 17 00:16:28.397685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 00:16:28.403298 extend-filesystems[1536]: Found loop3
May 17 00:16:28.403298 extend-filesystems[1536]: Found loop4
May 17 00:16:28.403298 extend-filesystems[1536]: Found loop5
May 17 00:16:28.403298 extend-filesystems[1536]: Found sr0
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda1
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda2
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda3
May 17 00:16:28.403298 extend-filesystems[1536]: Found usr
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda4
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda6
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda7
May 17 00:16:28.403298 extend-filesystems[1536]: Found vda9
May 17 00:16:28.403298 extend-filesystems[1536]: Checking size of /dev/vda9
May 17 00:16:28.404572 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:16:28.407348 dbus-daemon[1532]: [system] SELinux support is enabled
May 17 00:16:28.415616 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:16:28.426873 extend-filesystems[1536]: Resized partition /dev/vda9
May 17 00:16:28.419576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:16:28.428532 extend-filesystems[1563]: resize2fs 1.47.1 (20-May-2024)
May 17 00:16:28.433299 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1249)
May 17 00:16:28.436455 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:16:28.440278 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 17 00:16:28.440730 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:16:28.442151 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:16:28.451343 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:16:28.453811 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:16:28.465112 jq[1568]: true
May 17 00:16:28.465926 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:16:28.466331 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:16:28.470075 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:16:28.470556 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:16:28.482990 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 17 00:16:28.485341 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:16:28.485676 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:16:28.506325 jq[1578]: true
May 17 00:16:28.507796 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:16:28.517844 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 17 00:16:28.518279 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 17 00:16:28.533797 update_engine[1566]: I20250517 00:16:28.533715 1566 main.cc:92] Flatcar Update Engine starting
May 17 00:16:28.536903 update_engine[1566]: I20250517 00:16:28.536867 1566 update_check_scheduler.cc:74] Next update check in 2m40s
May 17 00:16:28.543292 tar[1577]: linux-amd64/helm
May 17 00:16:28.551964 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:16:28.553540 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 00:16:28.553618 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:16:28.553640 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:16:28.554958 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:16:28.554974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:16:28.556923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:16:28.562372 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:16:28.572585 systemd-logind[1564]: Watching system buttons on /dev/input/event1 (Power Button)
May 17 00:16:28.572609 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:16:28.574632 systemd-logind[1564]: New seat seat0.
May 17 00:16:28.579155 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 17 00:16:28.579278 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:16:28.605441 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:16:28.607118 extend-filesystems[1563]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 17 00:16:28.607118 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 1
May 17 00:16:28.607118 extend-filesystems[1563]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 17 00:16:28.613192 extend-filesystems[1536]: Resized filesystem in /dev/vda9
May 17 00:16:28.609604 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:16:28.609975 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 17 00:16:28.620802 bash[1612]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:16:28.622488 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:16:28.625009 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 17 00:16:28.710281 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:16:28.738433 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:16:28.751531 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:16:28.758658 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:16:28.758976 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 17 00:16:28.763691 containerd[1579]: time="2025-05-17T00:16:28.762269791Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:16:28.772644 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 17 00:16:28.785143 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 17 00:16:28.795545 containerd[1579]: time="2025-05-17T00:16:28.795196513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:16:28.796780 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 17 00:16:28.797944 containerd[1579]: time="2025-05-17T00:16:28.797905544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:16:28.797944 containerd[1579]: time="2025-05-17T00:16:28.797940029Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:16:28.798003 containerd[1579]: time="2025-05-17T00:16:28.797955137Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:16:28.798180 containerd[1579]: time="2025-05-17T00:16:28.798146356Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:16:28.798180 containerd[1579]: time="2025-05-17T00:16:28.798177534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.798245341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.798400362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.798728708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.798751521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.798770857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.798783661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.798887035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.799128448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.799339714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.799353951Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:16:28.799544 containerd[1579]: time="2025-05-17T00:16:28.799460040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:16:28.800487 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 17 00:16:28.802029 systemd[1]: Reached target getty.target - Login Prompts.
May 17 00:16:28.802494 containerd[1579]: time="2025-05-17T00:16:28.799532516Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:16:28.809222 containerd[1579]: time="2025-05-17T00:16:28.809171075Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:16:28.809275 containerd[1579]: time="2025-05-17T00:16:28.809242278Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:16:28.809297 containerd[1579]: time="2025-05-17T00:16:28.809273898Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:16:28.809297 containerd[1579]: time="2025-05-17T00:16:28.809290699Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:16:28.809356 containerd[1579]: time="2025-05-17T00:16:28.809308412Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:16:28.809513 containerd[1579]: time="2025-05-17T00:16:28.809493089Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:16:28.809843 containerd[1579]: time="2025-05-17T00:16:28.809812318Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:16:28.810033 containerd[1579]: time="2025-05-17T00:16:28.810003777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:16:28.810033 containerd[1579]: time="2025-05-17T00:16:28.810025598Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 17 00:16:28.810089 containerd[1579]: time="2025-05-17T00:16:28.810038722Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:16:28.810089 containerd[1579]: time="2025-05-17T00:16:28.810052688Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810089 containerd[1579]: time="2025-05-17T00:16:28.810065052Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810089 containerd[1579]: time="2025-05-17T00:16:28.810077305Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810176 containerd[1579]: time="2025-05-17T00:16:28.810090720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810176 containerd[1579]: time="2025-05-17T00:16:28.810105998Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810176 containerd[1579]: time="2025-05-17T00:16:28.810120155Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810176 containerd[1579]: time="2025-05-17T00:16:28.810132097Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810176 containerd[1579]: time="2025-05-17T00:16:28.810143970Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:16:28.810176 containerd[1579]: time="2025-05-17T00:16:28.810171371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810299 containerd[1579]: time="2025-05-17T00:16:28.810186239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810299 containerd[1579]: time="2025-05-17T00:16:28.810200245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810299 containerd[1579]: time="2025-05-17T00:16:28.810215474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810299 containerd[1579]: time="2025-05-17T00:16:28.810229520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810299 containerd[1579]: time="2025-05-17T00:16:28.810273703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810299 containerd[1579]: time="2025-05-17T00:16:28.810286798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810299 containerd[1579]: time="2025-05-17T00:16:28.810298540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810313237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810328907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810341450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810353623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810366717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810382497Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810402675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810414567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810444 containerd[1579]: time="2025-05-17T00:16:28.810426239Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810477265Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810495509Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810506850Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810524483Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810534372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810552957Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810568666Z" level=info msg="NRI interface is disabled by configuration."
May 17 00:16:28.810609 containerd[1579]: time="2025-05-17T00:16:28.810579156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:16:28.810967 containerd[1579]: time="2025-05-17T00:16:28.810837941Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:16:28.810967 containerd[1579]: time="2025-05-17T00:16:28.810907943Z" level=info msg="Connect containerd service" May 17 00:16:28.810967 containerd[1579]: time="2025-05-17T00:16:28.810948799Z" level=info msg="using legacy CRI server" May 17 00:16:28.810967 containerd[1579]: time="2025-05-17T00:16:28.810958768Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:16:28.811260 containerd[1579]: time="2025-05-17T00:16:28.811077030Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:16:28.811730 containerd[1579]: time="2025-05-17T00:16:28.811695009Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:16:28.811862 containerd[1579]: time="2025-05-17T00:16:28.811829842Z" level=info msg="Start subscribing containerd event" May 17 00:16:28.811900 containerd[1579]: time="2025-05-17T00:16:28.811884184Z" level=info msg="Start recovering state" May 17 00:16:28.812083 containerd[1579]: 
time="2025-05-17T00:16:28.811942944Z" level=info msg="Start event monitor" May 17 00:16:28.812083 containerd[1579]: time="2025-05-17T00:16:28.811959345Z" level=info msg="Start snapshots syncer" May 17 00:16:28.812083 containerd[1579]: time="2025-05-17T00:16:28.811968692Z" level=info msg="Start cni network conf syncer for default" May 17 00:16:28.812083 containerd[1579]: time="2025-05-17T00:16:28.811976377Z" level=info msg="Start streaming server" May 17 00:16:28.812352 containerd[1579]: time="2025-05-17T00:16:28.812331653Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:16:28.812400 containerd[1579]: time="2025-05-17T00:16:28.812381877Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:16:28.812449 containerd[1579]: time="2025-05-17T00:16:28.812435358Z" level=info msg="containerd successfully booted in 0.051329s" May 17 00:16:28.813046 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:16:28.975827 tar[1577]: linux-amd64/LICENSE May 17 00:16:28.975827 tar[1577]: linux-amd64/README.md May 17 00:16:28.989926 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:16:29.366620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:16:29.368846 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:16:29.370170 systemd[1]: Startup finished in 6.326s (kernel) + 3.852s (userspace) = 10.178s. 
May 17 00:16:29.383857 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:16:29.813097 kubelet[1665]: E0517 00:16:29.812979 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:16:29.816751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:16:29.817041 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:16:32.971771 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:16:32.981447 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:38224.service - OpenSSH per-connection server daemon (10.0.0.1:38224). May 17 00:16:33.019680 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 38224 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:33.021677 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:33.030069 systemd-logind[1564]: New session 1 of user core. May 17 00:16:33.031126 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:16:33.039496 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:16:33.050970 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:16:33.053621 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:16:33.061744 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:16:33.184806 systemd[1683]: Queued start job for default target default.target. 
May 17 00:16:33.185226 systemd[1683]: Created slice app.slice - User Application Slice. May 17 00:16:33.185244 systemd[1683]: Reached target paths.target - Paths. May 17 00:16:33.185270 systemd[1683]: Reached target timers.target - Timers. May 17 00:16:33.198453 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:16:33.205039 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:16:33.205126 systemd[1683]: Reached target sockets.target - Sockets. May 17 00:16:33.205139 systemd[1683]: Reached target basic.target - Basic System. May 17 00:16:33.205188 systemd[1683]: Reached target default.target - Main User Target. May 17 00:16:33.205224 systemd[1683]: Startup finished in 136ms. May 17 00:16:33.206181 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:16:33.208304 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:16:33.274458 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:38226.service - OpenSSH per-connection server daemon (10.0.0.1:38226). May 17 00:16:33.310579 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 38226 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:33.312183 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:33.316118 systemd-logind[1564]: New session 2 of user core. May 17 00:16:33.328487 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:16:33.382359 sshd[1696]: pam_unix(sshd:session): session closed for user core May 17 00:16:33.395463 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:38230.service - OpenSSH per-connection server daemon (10.0.0.1:38230). May 17 00:16:33.395929 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:38226.service: Deactivated successfully. May 17 00:16:33.398298 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. May 17 00:16:33.399670 systemd[1]: session-2.scope: Deactivated successfully. 
May 17 00:16:33.400336 systemd-logind[1564]: Removed session 2. May 17 00:16:33.429002 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 38230 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:33.430432 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:33.434361 systemd-logind[1564]: New session 3 of user core. May 17 00:16:33.440485 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:16:33.489423 sshd[1701]: pam_unix(sshd:session): session closed for user core May 17 00:16:33.498455 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:38240.service - OpenSSH per-connection server daemon (10.0.0.1:38240). May 17 00:16:33.499037 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:38230.service: Deactivated successfully. May 17 00:16:33.501378 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. May 17 00:16:33.502210 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:16:33.503262 systemd-logind[1564]: Removed session 3. May 17 00:16:33.532060 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 38240 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:33.533405 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:33.537001 systemd-logind[1564]: New session 4 of user core. May 17 00:16:33.547480 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:16:33.601561 sshd[1709]: pam_unix(sshd:session): session closed for user core May 17 00:16:33.609479 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:38246.service - OpenSSH per-connection server daemon (10.0.0.1:38246). May 17 00:16:33.609933 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:38240.service: Deactivated successfully. May 17 00:16:33.612489 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. May 17 00:16:33.613671 systemd[1]: session-4.scope: Deactivated successfully. 
May 17 00:16:33.614639 systemd-logind[1564]: Removed session 4. May 17 00:16:33.643607 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 38246 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:33.645279 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:33.649666 systemd-logind[1564]: New session 5 of user core. May 17 00:16:33.664670 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:16:33.723855 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:16:33.724219 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:16:33.745288 sudo[1724]: pam_unix(sudo:session): session closed for user root May 17 00:16:33.747655 sshd[1717]: pam_unix(sshd:session): session closed for user core May 17 00:16:33.756475 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:38262.service - OpenSSH per-connection server daemon (10.0.0.1:38262). May 17 00:16:33.756929 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:38246.service: Deactivated successfully. May 17 00:16:33.759339 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. May 17 00:16:33.761017 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:16:33.761692 systemd-logind[1564]: Removed session 5. May 17 00:16:33.794180 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 38262 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:33.795969 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:33.799904 systemd-logind[1564]: New session 6 of user core. May 17 00:16:33.808497 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 17 00:16:33.862621 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:16:33.862937 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:16:33.866159 sudo[1734]: pam_unix(sudo:session): session closed for user root May 17 00:16:33.871603 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:16:33.871988 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:16:33.899604 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:16:33.901961 auditctl[1737]: No rules May 17 00:16:33.903474 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:16:33.903861 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:16:33.906233 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:16:33.934887 augenrules[1756]: No rules May 17 00:16:33.936486 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:16:33.937731 sudo[1733]: pam_unix(sudo:session): session closed for user root May 17 00:16:33.939808 sshd[1726]: pam_unix(sshd:session): session closed for user core May 17 00:16:33.952497 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:38268.service - OpenSSH per-connection server daemon (10.0.0.1:38268). May 17 00:16:33.953196 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:38262.service: Deactivated successfully. May 17 00:16:33.955138 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:16:33.955795 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. May 17 00:16:33.956967 systemd-logind[1564]: Removed session 6. 
May 17 00:16:33.986722 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 38268 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:33.988033 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:33.991668 systemd-logind[1564]: New session 7 of user core. May 17 00:16:34.002500 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:16:34.053305 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:16:34.053622 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:16:34.320447 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:16:34.320732 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:16:34.582811 dockerd[1787]: time="2025-05-17T00:16:34.582610627Z" level=info msg="Starting up" May 17 00:16:35.196378 dockerd[1787]: time="2025-05-17T00:16:35.196335870Z" level=info msg="Loading containers: start." May 17 00:16:35.304283 kernel: Initializing XFRM netlink socket May 17 00:16:35.376084 systemd-networkd[1242]: docker0: Link UP May 17 00:16:35.403706 dockerd[1787]: time="2025-05-17T00:16:35.403654060Z" level=info msg="Loading containers: done." May 17 00:16:35.418311 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2102863109-merged.mount: Deactivated successfully. 
May 17 00:16:35.421321 dockerd[1787]: time="2025-05-17T00:16:35.421284860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:16:35.421403 dockerd[1787]: time="2025-05-17T00:16:35.421388384Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:16:35.421508 dockerd[1787]: time="2025-05-17T00:16:35.421491087Z" level=info msg="Daemon has completed initialization" May 17 00:16:35.462721 dockerd[1787]: time="2025-05-17T00:16:35.462366057Z" level=info msg="API listen on /run/docker.sock" May 17 00:16:35.462556 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:16:36.125227 containerd[1579]: time="2025-05-17T00:16:36.119899432Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:16:38.270689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366471098.mount: Deactivated successfully. May 17 00:16:40.067245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:16:40.080423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:16:40.245605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:16:40.249982 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:16:40.994798 kubelet[1958]: E0517 00:16:40.994747 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:16:41.000839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:16:41.001142 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:16:48.085391 containerd[1579]: time="2025-05-17T00:16:48.085320010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:48.117801 containerd[1579]: time="2025-05-17T00:16:48.117727257Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:16:48.164474 containerd[1579]: time="2025-05-17T00:16:48.164421913Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:48.207619 containerd[1579]: time="2025-05-17T00:16:48.207564376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:48.208769 containerd[1579]: time="2025-05-17T00:16:48.208730022Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 12.088782621s" May 17 00:16:48.208821 containerd[1579]: time="2025-05-17T00:16:48.208776550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:16:48.209438 containerd[1579]: time="2025-05-17T00:16:48.209411871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:16:51.177137 containerd[1579]: time="2025-05-17T00:16:51.177054260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:51.214862 containerd[1579]: time="2025-05-17T00:16:51.214812965Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:16:51.251575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:16:51.252822 containerd[1579]: time="2025-05-17T00:16:51.252761575Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:51.271521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:16:51.277310 containerd[1579]: time="2025-05-17T00:16:51.277228989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:51.278469 containerd[1579]: time="2025-05-17T00:16:51.278433368Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 3.068988505s" May 17 00:16:51.278558 containerd[1579]: time="2025-05-17T00:16:51.278474846Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:16:51.279044 containerd[1579]: time="2025-05-17T00:16:51.278943585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:16:51.429238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:16:51.433636 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:16:51.513203 kubelet[2029]: E0517 00:16:51.513145 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:16:51.517350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:16:51.517626 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:16:55.130739 containerd[1579]: time="2025-05-17T00:16:55.130660255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:55.169565 containerd[1579]: time="2025-05-17T00:16:55.169524884Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:16:55.178910 containerd[1579]: time="2025-05-17T00:16:55.178809940Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:55.203094 containerd[1579]: time="2025-05-17T00:16:55.203060146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:55.204189 containerd[1579]: time="2025-05-17T00:16:55.204146494Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 3.925154729s" May 17 00:16:55.204229 containerd[1579]: time="2025-05-17T00:16:55.204194013Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:16:55.204738 containerd[1579]: time="2025-05-17T00:16:55.204716303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:16:58.571328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854038862.mount: Deactivated successfully. May 17 00:16:59.492575 containerd[1579]: time="2025-05-17T00:16:59.492500290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:59.551112 containerd[1579]: time="2025-05-17T00:16:59.551054321Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:16:59.594524 containerd[1579]: time="2025-05-17T00:16:59.594472020Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:59.626311 containerd[1579]: time="2025-05-17T00:16:59.626276466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:59.626961 containerd[1579]: time="2025-05-17T00:16:59.626932246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 4.422184514s" May 17 00:16:59.627005 containerd[1579]: time="2025-05-17T00:16:59.626968013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:16:59.627473 containerd[1579]: time="2025-05-17T00:16:59.627447863Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:17:01.570078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:17:01.578443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:01.788602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:01.794241 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:01.997153 kubelet[2063]: E0517 00:17:01.996975 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:02.001077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:02.001378 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:02.236161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143027202.mount: Deactivated successfully. 
May 17 00:17:05.778671 containerd[1579]: time="2025-05-17T00:17:05.778604677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:05.779343 containerd[1579]: time="2025-05-17T00:17:05.779294384Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:17:05.780568 containerd[1579]: time="2025-05-17T00:17:05.780519630Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:05.783235 containerd[1579]: time="2025-05-17T00:17:05.783207549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:05.784337 containerd[1579]: time="2025-05-17T00:17:05.784289726Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 6.15680859s" May 17 00:17:05.784368 containerd[1579]: time="2025-05-17T00:17:05.784335167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:17:05.785273 containerd[1579]: time="2025-05-17T00:17:05.785229945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:17:06.269369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452660095.mount: Deactivated successfully. 
May 17 00:17:06.275571 containerd[1579]: time="2025-05-17T00:17:06.275530517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:06.276300 containerd[1579]: time="2025-05-17T00:17:06.276257534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:17:06.277405 containerd[1579]: time="2025-05-17T00:17:06.277374158Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:06.279478 containerd[1579]: time="2025-05-17T00:17:06.279436515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:06.280088 containerd[1579]: time="2025-05-17T00:17:06.280048084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.774199ms" May 17 00:17:06.280129 containerd[1579]: time="2025-05-17T00:17:06.280087204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:17:06.280551 containerd[1579]: time="2025-05-17T00:17:06.280519879Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:17:06.964866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470181080.mount: Deactivated successfully. 
May 17 00:17:10.141993 containerd[1579]: time="2025-05-17T00:17:10.141886889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:10.143315 containerd[1579]: time="2025-05-17T00:17:10.143273042Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:17:10.145150 containerd[1579]: time="2025-05-17T00:17:10.144946388Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:10.149840 containerd[1579]: time="2025-05-17T00:17:10.149799583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:10.151356 containerd[1579]: time="2025-05-17T00:17:10.151308629Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.870762694s" May 17 00:17:10.151408 containerd[1579]: time="2025-05-17T00:17:10.151362408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:17:12.070113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:17:12.079401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:12.386402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:17:12.388626 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:12.518363 kubelet[2220]: E0517 00:17:12.518303 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:12.524525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:12.524816 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:12.547403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:12.563556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:12.590810 systemd[1]: Reloading requested from client PID 2237 ('systemctl') (unit session-7.scope)... May 17 00:17:12.590827 systemd[1]: Reloading... May 17 00:17:12.666510 zram_generator::config[2279]: No configuration found. May 17 00:17:13.570357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:17:13.649405 systemd[1]: Reloading finished in 1058 ms. May 17 00:17:13.701173 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:17:13.701445 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:17:13.701807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:13.703551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:13.876974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:17:13.882839 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:17:13.925471 kubelet[2336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:17:13.925471 kubelet[2336]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:17:13.925471 kubelet[2336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:17:13.925797 kubelet[2336]: I0517 00:17:13.925587 2336 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:17:14.230510 update_engine[1566]: I20250517 00:17:14.230314 1566 update_attempter.cc:509] Updating boot flags... 
May 17 00:17:14.371527 kubelet[2336]: I0517 00:17:14.371474 2336 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:17:14.371527 kubelet[2336]: I0517 00:17:14.371524 2336 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:17:14.372052 kubelet[2336]: I0517 00:17:14.372014 2336 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:17:14.977887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2354) May 17 00:17:14.979484 kubelet[2336]: I0517 00:17:14.979461 2336 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:17:14.980753 kubelet[2336]: E0517 00:17:14.980440 2336 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:14.995218 kubelet[2336]: E0517 00:17:14.995172 2336 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:17:14.995218 kubelet[2336]: I0517 00:17:14.995216 2336 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:17:15.002278 kubelet[2336]: I0517 00:17:15.002195 2336 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:17:15.003525 kubelet[2336]: I0517 00:17:15.003491 2336 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:17:15.003827 kubelet[2336]: I0517 00:17:15.003774 2336 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:17:15.004008 kubelet[2336]: I0517 00:17:15.003825 2336 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} May 17 00:17:15.004127 kubelet[2336]: I0517 00:17:15.004014 2336 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:17:15.004127 kubelet[2336]: I0517 00:17:15.004024 2336 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:17:15.004182 kubelet[2336]: I0517 00:17:15.004167 2336 state_mem.go:36] "Initialized new in-memory state store" May 17 00:17:15.009159 kubelet[2336]: I0517 00:17:15.008954 2336 kubelet.go:408] "Attempting to sync node with API server" May 17 00:17:15.009159 kubelet[2336]: I0517 00:17:15.008979 2336 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:17:15.009159 kubelet[2336]: I0517 00:17:15.009020 2336 kubelet.go:314] "Adding apiserver pod source" May 17 00:17:15.009159 kubelet[2336]: I0517 00:17:15.009040 2336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:17:15.014192 kubelet[2336]: W0517 00:17:15.014049 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:15.014192 kubelet[2336]: E0517 00:17:15.014103 2336 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:15.014383 kubelet[2336]: I0517 00:17:15.014369 2336 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:17:15.014885 kubelet[2336]: I0517 00:17:15.014870 2336 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 
00:17:15.014993 kubelet[2336]: W0517 00:17:15.014983 2336 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:17:15.017616 kubelet[2336]: W0517 00:17:15.017568 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:15.017678 kubelet[2336]: E0517 00:17:15.017615 2336 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:15.018263 kubelet[2336]: I0517 00:17:15.018228 2336 server.go:1274] "Started kubelet" May 17 00:17:15.019002 kubelet[2336]: I0517 00:17:15.018955 2336 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:17:15.031276 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2356) May 17 00:17:15.046158 kubelet[2336]: I0517 00:17:15.045177 2336 server.go:449] "Adding debug handlers to kubelet server" May 17 00:17:15.046158 kubelet[2336]: I0517 00:17:15.045534 2336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:17:15.047976 kubelet[2336]: I0517 00:17:15.047909 2336 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:17:15.061335 kubelet[2336]: I0517 00:17:15.061287 2336 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:17:15.061429 kubelet[2336]: I0517 
00:17:15.047917 2336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:17:15.062610 kubelet[2336]: E0517 00:17:15.061103 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1840285cee3377f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:17:15.01820517 +0000 UTC m=+1.131237535,LastTimestamp:2025-05-17 00:17:15.01820517 +0000 UTC m=+1.131237535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:17:15.062937 kubelet[2336]: I0517 00:17:15.062915 2336 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:17:15.063304 kubelet[2336]: I0517 00:17:15.063204 2336 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:17:15.063304 kubelet[2336]: I0517 00:17:15.063244 2336 reconciler.go:26] "Reconciler: start to sync state" May 17 00:17:15.063574 kubelet[2336]: W0517 00:17:15.063517 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:15.063574 kubelet[2336]: E0517 00:17:15.063558 2336 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" 
logger="UnhandledError" May 17 00:17:15.063879 kubelet[2336]: E0517 00:17:15.063595 2336 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:17:15.068752 kubelet[2336]: E0517 00:17:15.063993 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" May 17 00:17:15.069071 kubelet[2336]: I0517 00:17:15.068984 2336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:17:15.070674 kubelet[2336]: I0517 00:17:15.070547 2336 factory.go:221] Registration of the containerd container factory successfully May 17 00:17:15.070674 kubelet[2336]: I0517 00:17:15.070559 2336 factory.go:221] Registration of the systemd container factory successfully May 17 00:17:15.075299 kubelet[2336]: E0517 00:17:15.075280 2336 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:17:15.094279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2356) May 17 00:17:15.133888 kubelet[2336]: I0517 00:17:15.133369 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:17:15.136401 kubelet[2336]: I0517 00:17:15.136380 2336 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:17:15.136451 kubelet[2336]: I0517 00:17:15.136409 2336 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:17:15.136847 kubelet[2336]: I0517 00:17:15.136825 2336 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:17:15.136911 kubelet[2336]: E0517 00:17:15.136885 2336 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:17:15.143083 kubelet[2336]: W0517 00:17:15.141404 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:15.143083 kubelet[2336]: E0517 00:17:15.141466 2336 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:15.143563 kubelet[2336]: I0517 00:17:15.143367 2336 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:17:15.143595 kubelet[2336]: I0517 00:17:15.143562 2336 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:17:15.143670 kubelet[2336]: I0517 00:17:15.143581 2336 state_mem.go:36] "Initialized new in-memory state store" May 17 00:17:15.145487 kubelet[2336]: I0517 00:17:15.145456 2336 policy_none.go:49] "None policy: Start" May 17 00:17:15.146099 kubelet[2336]: I0517 00:17:15.146084 2336 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:17:15.146158 kubelet[2336]: I0517 00:17:15.146106 2336 state_mem.go:35] "Initializing new in-memory state store" May 17 00:17:15.152525 kubelet[2336]: I0517 00:17:15.152497 2336 manager.go:513] "Failed 
to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:17:15.152703 kubelet[2336]: I0517 00:17:15.152688 2336 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:17:15.152730 kubelet[2336]: I0517 00:17:15.152703 2336 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:17:15.153571 kubelet[2336]: I0517 00:17:15.153547 2336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:17:15.156902 kubelet[2336]: E0517 00:17:15.156861 2336 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:17:15.253826 kubelet[2336]: I0517 00:17:15.253711 2336 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:17:15.254087 kubelet[2336]: E0517 00:17:15.254052 2336 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 17 00:17:15.264546 kubelet[2336]: E0517 00:17:15.264512 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" May 17 00:17:15.365077 kubelet[2336]: I0517 00:17:15.365002 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79304e1e0e8e6bad9a40772a89516b1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79304e1e0e8e6bad9a40772a89516b1b\") " pod="kube-system/kube-apiserver-localhost" May 17 00:17:15.365077 kubelet[2336]: I0517 00:17:15.365065 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79304e1e0e8e6bad9a40772a89516b1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"79304e1e0e8e6bad9a40772a89516b1b\") " pod="kube-system/kube-apiserver-localhost" May 17 00:17:15.365318 kubelet[2336]: I0517 00:17:15.365102 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:15.365318 kubelet[2336]: I0517 00:17:15.365123 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:15.365318 kubelet[2336]: I0517 00:17:15.365147 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:17:15.365318 kubelet[2336]: I0517 00:17:15.365181 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79304e1e0e8e6bad9a40772a89516b1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"79304e1e0e8e6bad9a40772a89516b1b\") " pod="kube-system/kube-apiserver-localhost" May 17 00:17:15.365318 kubelet[2336]: I0517 00:17:15.365216 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:15.365591 kubelet[2336]: I0517 00:17:15.365236 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:15.365591 kubelet[2336]: I0517 00:17:15.365290 2336 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:15.456280 kubelet[2336]: I0517 00:17:15.456227 2336 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:17:15.456757 kubelet[2336]: E0517 00:17:15.456715 2336 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 17 00:17:15.544208 kubelet[2336]: E0517 00:17:15.544063 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:15.545011 kubelet[2336]: E0517 00:17:15.544991 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:15.545090 containerd[1579]: 
time="2025-05-17T00:17:15.544994978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79304e1e0e8e6bad9a40772a89516b1b,Namespace:kube-system,Attempt:0,}" May 17 00:17:15.545488 containerd[1579]: time="2025-05-17T00:17:15.545384525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 17 00:17:15.546519 kubelet[2336]: E0517 00:17:15.546492 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:15.546752 containerd[1579]: time="2025-05-17T00:17:15.546725563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 17 00:17:15.665983 kubelet[2336]: E0517 00:17:15.665920 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" May 17 00:17:15.858775 kubelet[2336]: I0517 00:17:15.858743 2336 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:17:15.859142 kubelet[2336]: E0517 00:17:15.859089 2336 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 17 00:17:15.895034 kubelet[2336]: W0517 00:17:15.894943 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:15.895034 kubelet[2336]: E0517 00:17:15.895028 2336 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:15.996312 kubelet[2336]: W0517 00:17:15.996203 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:15.996710 kubelet[2336]: E0517 00:17:15.996313 2336 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:16.241735 kubelet[2336]: W0517 00:17:16.241611 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:16.241735 kubelet[2336]: E0517 00:17:16.241674 2336 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:16.266316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750680431.mount: Deactivated successfully. 
May 17 00:17:16.273454 containerd[1579]: time="2025-05-17T00:17:16.273403354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:17:16.274324 containerd[1579]: time="2025-05-17T00:17:16.274288426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:17:16.275183 containerd[1579]: time="2025-05-17T00:17:16.275141877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:17:16.276061 containerd[1579]: time="2025-05-17T00:17:16.276040413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:17:16.276755 kubelet[2336]: W0517 00:17:16.276692 2336 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 17 00:17:16.276831 kubelet[2336]: E0517 00:17:16.276767 2336 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:16.277159 containerd[1579]: time="2025-05-17T00:17:16.277126965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:17:16.278858 containerd[1579]: time="2025-05-17T00:17:16.278829191Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:17:16.280141 containerd[1579]: time="2025-05-17T00:17:16.280100123Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:17:16.282240 containerd[1579]: time="2025-05-17T00:17:16.282212695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:17:16.284171 containerd[1579]: time="2025-05-17T00:17:16.284140998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.066964ms" May 17 00:17:16.284852 containerd[1579]: time="2025-05-17T00:17:16.284806203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.349926ms" May 17 00:17:16.285434 containerd[1579]: time="2025-05-17T00:17:16.285376022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 738.598455ms" May 17 00:17:16.468084 kubelet[2336]: E0517 
00:17:16.468020 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s" May 17 00:17:16.533895 containerd[1579]: time="2025-05-17T00:17:16.533030139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:16.533895 containerd[1579]: time="2025-05-17T00:17:16.533086343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:16.533895 containerd[1579]: time="2025-05-17T00:17:16.533320605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:16.534043 containerd[1579]: time="2025-05-17T00:17:16.533842446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:16.534043 containerd[1579]: time="2025-05-17T00:17:16.533943803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:16.534668 containerd[1579]: time="2025-05-17T00:17:16.534104328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:16.534668 containerd[1579]: time="2025-05-17T00:17:16.534182854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:16.534668 containerd[1579]: time="2025-05-17T00:17:16.534409521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:16.540901 containerd[1579]: time="2025-05-17T00:17:16.540761383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:16.540901 containerd[1579]: time="2025-05-17T00:17:16.540828417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:16.540901 containerd[1579]: time="2025-05-17T00:17:16.540850558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:16.541682 containerd[1579]: time="2025-05-17T00:17:16.541635754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:16.657020 containerd[1579]: time="2025-05-17T00:17:16.656944484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"0439b71eddde0e3a7560931cbd428804e0b4cd985661a75f045a656216390eb2\"" May 17 00:17:16.659509 kubelet[2336]: E0517 00:17:16.659455 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:16.661556 kubelet[2336]: I0517 00:17:16.660991 2336 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:17:16.661556 kubelet[2336]: E0517 00:17:16.661465 2336 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 17 00:17:16.662725 containerd[1579]: time="2025-05-17T00:17:16.662628496Z" level=info msg="CreateContainer within sandbox 
\"0439b71eddde0e3a7560931cbd428804e0b4cd985661a75f045a656216390eb2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:17:16.667114 containerd[1579]: time="2025-05-17T00:17:16.667064748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"df7aac333bfe1275f8a7ba3d6caa76a136ea3a3c389b09ceecb3b3c3b9054c48\"" May 17 00:17:16.667851 kubelet[2336]: E0517 00:17:16.667829 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:16.669462 containerd[1579]: time="2025-05-17T00:17:16.669434675Z" level=info msg="CreateContainer within sandbox \"df7aac333bfe1275f8a7ba3d6caa76a136ea3a3c389b09ceecb3b3c3b9054c48\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:17:16.671589 containerd[1579]: time="2025-05-17T00:17:16.671524595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79304e1e0e8e6bad9a40772a89516b1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7f374040276625a75fa2d8654fa3355eac8d8fe56369475fb23db55ccf33b1f\"" May 17 00:17:16.672126 kubelet[2336]: E0517 00:17:16.672053 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:16.673824 containerd[1579]: time="2025-05-17T00:17:16.673797072Z" level=info msg="CreateContainer within sandbox \"b7f374040276625a75fa2d8654fa3355eac8d8fe56369475fb23db55ccf33b1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:17:17.127834 kubelet[2336]: E0517 00:17:17.127788 2336 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from 
the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 17 00:17:17.318361 containerd[1579]: time="2025-05-17T00:17:17.318301498Z" level=info msg="CreateContainer within sandbox \"0439b71eddde0e3a7560931cbd428804e0b4cd985661a75f045a656216390eb2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c1dd91622ec50197e12da2ce97fcd47fcc8568e02771dccdb9a4ff79ae6665d\"" May 17 00:17:17.319117 containerd[1579]: time="2025-05-17T00:17:17.319071348Z" level=info msg="StartContainer for \"0c1dd91622ec50197e12da2ce97fcd47fcc8568e02771dccdb9a4ff79ae6665d\"" May 17 00:17:17.347015 containerd[1579]: time="2025-05-17T00:17:17.346975105Z" level=info msg="CreateContainer within sandbox \"df7aac333bfe1275f8a7ba3d6caa76a136ea3a3c389b09ceecb3b3c3b9054c48\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc003a7eed91a488ad32108fcde4ac5bcc2c91b9673814f6b41d369993ffcfe7\"" May 17 00:17:17.347524 containerd[1579]: time="2025-05-17T00:17:17.347487240Z" level=info msg="StartContainer for \"dc003a7eed91a488ad32108fcde4ac5bcc2c91b9673814f6b41d369993ffcfe7\"" May 17 00:17:17.352349 containerd[1579]: time="2025-05-17T00:17:17.352310526Z" level=info msg="CreateContainer within sandbox \"b7f374040276625a75fa2d8654fa3355eac8d8fe56369475fb23db55ccf33b1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18a9724f01d76b3fc2b9a6b1e2558dab62380d30986b02e5e121b654dc5a641f\"" May 17 00:17:17.354446 containerd[1579]: time="2025-05-17T00:17:17.354402604Z" level=info msg="StartContainer for \"18a9724f01d76b3fc2b9a6b1e2558dab62380d30986b02e5e121b654dc5a641f\"" May 17 00:17:17.399865 containerd[1579]: time="2025-05-17T00:17:17.398624610Z" level=info msg="StartContainer for \"0c1dd91622ec50197e12da2ce97fcd47fcc8568e02771dccdb9a4ff79ae6665d\" 
returns successfully" May 17 00:17:17.436603 containerd[1579]: time="2025-05-17T00:17:17.436538650Z" level=info msg="StartContainer for \"dc003a7eed91a488ad32108fcde4ac5bcc2c91b9673814f6b41d369993ffcfe7\" returns successfully" May 17 00:17:17.441513 containerd[1579]: time="2025-05-17T00:17:17.441423839Z" level=info msg="StartContainer for \"18a9724f01d76b3fc2b9a6b1e2558dab62380d30986b02e5e121b654dc5a641f\" returns successfully" May 17 00:17:18.150269 kubelet[2336]: E0517 00:17:18.150205 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:18.152557 kubelet[2336]: E0517 00:17:18.152426 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:18.153427 kubelet[2336]: E0517 00:17:18.153400 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:18.283995 kubelet[2336]: I0517 00:17:18.263957 2336 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:17:18.460600 kubelet[2336]: E0517 00:17:18.459539 2336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 00:17:18.643664 kubelet[2336]: I0517 00:17:18.643625 2336 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:17:18.643771 kubelet[2336]: E0517 00:17:18.643676 2336 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 17 00:17:18.837318 kubelet[2336]: E0517 00:17:18.837210 2336 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" May 17 00:17:18.938011 kubelet[2336]: E0517 00:17:18.937954 2336 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:17:19.038329 kubelet[2336]: E0517 00:17:19.038210 2336 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:17:19.159351 kubelet[2336]: E0517 00:17:19.159229 2336 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 00:17:19.159351 kubelet[2336]: E0517 00:17:19.159267 2336 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 17 00:17:19.159785 kubelet[2336]: E0517 00:17:19.159390 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:19.159785 kubelet[2336]: E0517 00:17:19.159229 2336 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 00:17:19.159785 kubelet[2336]: E0517 00:17:19.159447 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:19.159785 kubelet[2336]: E0517 00:17:19.159504 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:20.013857 kubelet[2336]: I0517 00:17:20.013822 2336 apiserver.go:52] "Watching 
apiserver" May 17 00:17:20.063515 kubelet[2336]: I0517 00:17:20.063484 2336 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:17:20.163081 kubelet[2336]: E0517 00:17:20.163044 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:20.212345 kubelet[2336]: E0517 00:17:20.212313 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:20.318032 systemd[1]: Reloading requested from client PID 2630 ('systemctl') (unit session-7.scope)... May 17 00:17:20.318057 systemd[1]: Reloading... May 17 00:17:20.391340 zram_generator::config[2673]: No configuration found. May 17 00:17:20.504701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:17:20.586293 systemd[1]: Reloading finished in 267 ms. May 17 00:17:20.619489 kubelet[2336]: I0517 00:17:20.619444 2336 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:17:20.619509 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:20.642666 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:17:20.643066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:20.654473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:20.830016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:17:20.835439 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:17:20.887538 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:17:20.887538 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:17:20.887538 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:17:20.887538 kubelet[2724]: I0517 00:17:20.886704 2724 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:17:20.893817 kubelet[2724]: I0517 00:17:20.893782 2724 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:17:20.893817 kubelet[2724]: I0517 00:17:20.893806 2724 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:17:20.894008 kubelet[2724]: I0517 00:17:20.893986 2724 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:17:20.895140 kubelet[2724]: I0517 00:17:20.895105 2724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:17:20.896761 kubelet[2724]: I0517 00:17:20.896731 2724 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:17:20.899314 kubelet[2724]: E0517 00:17:20.899282 2724 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:17:20.899314 kubelet[2724]: I0517 00:17:20.899308 2724 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:17:20.903529 kubelet[2724]: I0517 00:17:20.903503 2724 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:17:20.903919 kubelet[2724]: I0517 00:17:20.903895 2724 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:17:20.904069 kubelet[2724]: I0517 00:17:20.904039 2724 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:17:20.904201 kubelet[2724]: I0517 00:17:20.904059 2724 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:17:20.904295 kubelet[2724]: I0517 00:17:20.904202 2724 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:17:20.904295 kubelet[2724]: I0517 00:17:20.904211 2724 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:17:20.904295 kubelet[2724]: I0517 00:17:20.904234 2724 state_mem.go:36] "Initialized new in-memory state store" May 17 00:17:20.904364 kubelet[2724]: I0517 00:17:20.904332 2724 kubelet.go:408] "Attempting 
to sync node with API server" May 17 00:17:20.904364 kubelet[2724]: I0517 00:17:20.904343 2724 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:17:20.904401 kubelet[2724]: I0517 00:17:20.904367 2724 kubelet.go:314] "Adding apiserver pod source" May 17 00:17:20.904401 kubelet[2724]: I0517 00:17:20.904378 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:17:20.906117 kubelet[2724]: I0517 00:17:20.905688 2724 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:17:20.906352 kubelet[2724]: I0517 00:17:20.906332 2724 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:17:20.906841 kubelet[2724]: I0517 00:17:20.906816 2724 server.go:1274] "Started kubelet" May 17 00:17:20.908965 kubelet[2724]: I0517 00:17:20.908599 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:17:20.910591 kubelet[2724]: I0517 00:17:20.910567 2724 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:17:20.910667 kubelet[2724]: I0517 00:17:20.910326 2724 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:17:20.912666 kubelet[2724]: I0517 00:17:20.911955 2724 server.go:449] "Adding debug handlers to kubelet server" May 17 00:17:20.913944 kubelet[2724]: I0517 00:17:20.913918 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:17:20.914683 kubelet[2724]: I0517 00:17:20.914648 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:17:20.914846 kubelet[2724]: I0517 00:17:20.914825 2724 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:17:20.914962 kubelet[2724]: I0517 00:17:20.914948 2724 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:17:20.915147 kubelet[2724]: I0517 00:17:20.915120 2724 reconciler.go:26] "Reconciler: start to sync state" May 17 00:17:20.917948 kubelet[2724]: E0517 00:17:20.917930 2724 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:17:20.918760 kubelet[2724]: I0517 00:17:20.918737 2724 factory.go:221] Registration of the systemd container factory successfully May 17 00:17:20.918839 kubelet[2724]: I0517 00:17:20.918820 2724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:17:20.923298 kubelet[2724]: E0517 00:17:20.922797 2724 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:17:20.925682 kubelet[2724]: I0517 00:17:20.924233 2724 factory.go:221] Registration of the containerd container factory successfully May 17 00:17:20.929609 kubelet[2724]: I0517 00:17:20.929555 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:17:20.930731 kubelet[2724]: I0517 00:17:20.930703 2724 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:17:20.930731 kubelet[2724]: I0517 00:17:20.930723 2724 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:17:20.930826 kubelet[2724]: I0517 00:17:20.930743 2724 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:17:20.930826 kubelet[2724]: E0517 00:17:20.930788 2724 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:17:20.975971 kubelet[2724]: I0517 00:17:20.975933 2724 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:17:20.975971 kubelet[2724]: I0517 00:17:20.975954 2724 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:17:20.975971 kubelet[2724]: I0517 00:17:20.975977 2724 state_mem.go:36] "Initialized new in-memory state store" May 17 00:17:20.976162 kubelet[2724]: I0517 00:17:20.976153 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:17:20.976183 kubelet[2724]: I0517 00:17:20.976164 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:17:20.976212 kubelet[2724]: I0517 00:17:20.976184 2724 policy_none.go:49] "None policy: Start" May 17 00:17:20.976971 kubelet[2724]: I0517 00:17:20.976946 2724 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:17:20.977006 kubelet[2724]: I0517 00:17:20.976978 2724 state_mem.go:35] "Initializing new in-memory state store" May 17 00:17:20.977237 kubelet[2724]: I0517 00:17:20.977210 2724 state_mem.go:75] "Updated machine memory state" May 17 00:17:20.978774 kubelet[2724]: I0517 00:17:20.978747 2724 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:17:20.978947 kubelet[2724]: I0517 00:17:20.978925 2724 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:17:20.978973 kubelet[2724]: I0517 00:17:20.978942 2724 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:17:20.980950 kubelet[2724]: I0517 00:17:20.979751 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:17:21.039069 kubelet[2724]: E0517 00:17:21.039005 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 17 00:17:21.039274 kubelet[2724]: E0517 00:17:21.039235 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:17:21.083725 kubelet[2724]: I0517 00:17:21.083687 2724 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:17:21.089592 kubelet[2724]: I0517 00:17:21.089556 2724 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 17 00:17:21.089692 kubelet[2724]: I0517 00:17:21.089630 2724 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:17:21.216156 kubelet[2724]: I0517 00:17:21.216015 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79304e1e0e8e6bad9a40772a89516b1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"79304e1e0e8e6bad9a40772a89516b1b\") " pod="kube-system/kube-apiserver-localhost" May 17 00:17:21.216156 kubelet[2724]: I0517 00:17:21.216088 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:21.216156 kubelet[2724]: I0517 00:17:21.216134 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:21.216326 kubelet[2724]: I0517 00:17:21.216196 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79304e1e0e8e6bad9a40772a89516b1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79304e1e0e8e6bad9a40772a89516b1b\") " pod="kube-system/kube-apiserver-localhost" May 17 00:17:21.216326 kubelet[2724]: I0517 00:17:21.216231 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79304e1e0e8e6bad9a40772a89516b1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"79304e1e0e8e6bad9a40772a89516b1b\") " pod="kube-system/kube-apiserver-localhost" May 17 00:17:21.216326 kubelet[2724]: I0517 00:17:21.216271 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:21.216326 kubelet[2724]: I0517 00:17:21.216287 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:21.216326 kubelet[2724]: I0517 00:17:21.216303 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:17:21.216461 kubelet[2724]: I0517 00:17:21.216317 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:17:21.339391 kubelet[2724]: E0517 00:17:21.339356 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:21.339500 kubelet[2724]: E0517 00:17:21.339446 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:21.339651 kubelet[2724]: E0517 00:17:21.339576 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:21.906777 kubelet[2724]: I0517 00:17:21.906728 2724 apiserver.go:52] "Watching apiserver" May 17 00:17:21.915910 kubelet[2724]: I0517 00:17:21.915877 2724 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:17:21.947303 kubelet[2724]: E0517 00:17:21.947217 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:21.947514 kubelet[2724]: E0517 00:17:21.947491 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:21.968370 kubelet[2724]: I0517 00:17:21.968308 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.968291958 podStartE2EDuration="1.968291958s" podCreationTimestamp="2025-05-17 00:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:17:21.968178457 +0000 UTC m=+1.128949152" watchObservedRunningTime="2025-05-17 00:17:21.968291958 +0000 UTC m=+1.129062643" May 17 00:17:22.106637 kubelet[2724]: E0517 00:17:22.106371 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:17:22.106637 kubelet[2724]: E0517 00:17:22.106550 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:22.205237 kubelet[2724]: I0517 00:17:22.205054 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.205036559 podStartE2EDuration="2.205036559s" podCreationTimestamp="2025-05-17 00:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:17:22.204660672 +0000 UTC m=+1.365431388" watchObservedRunningTime="2025-05-17 00:17:22.205036559 +0000 UTC m=+1.365807264" May 17 00:17:22.205237 kubelet[2724]: I0517 00:17:22.205141 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.205135853 podStartE2EDuration="1.205135853s" podCreationTimestamp="2025-05-17 00:17:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:17:22.167479092 +0000 UTC m=+1.328249807" watchObservedRunningTime="2025-05-17 00:17:22.205135853 +0000 UTC m=+1.365906548" May 17 00:17:23.169875 kubelet[2724]: E0517 00:17:22.948116 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:26.210530 kubelet[2724]: I0517 00:17:26.210497 2724 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:17:26.210966 containerd[1579]: time="2025-05-17T00:17:26.210930027Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:17:26.211239 kubelet[2724]: I0517 00:17:26.211128 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:17:26.243590 kubelet[2724]: I0517 00:17:26.243525 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7h4r\" (UniqueName: \"kubernetes.io/projected/5f1a2f65-522b-4ce6-af2a-4f947a98dc34-kube-api-access-h7h4r\") pod \"kube-proxy-wcqtd\" (UID: \"5f1a2f65-522b-4ce6-af2a-4f947a98dc34\") " pod="kube-system/kube-proxy-wcqtd" May 17 00:17:26.243590 kubelet[2724]: I0517 00:17:26.243594 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f1a2f65-522b-4ce6-af2a-4f947a98dc34-kube-proxy\") pod \"kube-proxy-wcqtd\" (UID: \"5f1a2f65-522b-4ce6-af2a-4f947a98dc34\") " pod="kube-system/kube-proxy-wcqtd" May 17 00:17:26.243806 kubelet[2724]: I0517 00:17:26.243620 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5f1a2f65-522b-4ce6-af2a-4f947a98dc34-lib-modules\") pod \"kube-proxy-wcqtd\" (UID: \"5f1a2f65-522b-4ce6-af2a-4f947a98dc34\") " pod="kube-system/kube-proxy-wcqtd" May 17 00:17:26.243806 kubelet[2724]: I0517 00:17:26.243639 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f1a2f65-522b-4ce6-af2a-4f947a98dc34-xtables-lock\") pod \"kube-proxy-wcqtd\" (UID: \"5f1a2f65-522b-4ce6-af2a-4f947a98dc34\") " pod="kube-system/kube-proxy-wcqtd" May 17 00:17:26.349106 kubelet[2724]: E0517 00:17:26.349068 2724 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:17:26.349106 kubelet[2724]: E0517 00:17:26.349105 2724 projected.go:194] Error preparing data for projected volume kube-api-access-h7h4r for pod kube-system/kube-proxy-wcqtd: configmap "kube-root-ca.crt" not found May 17 00:17:26.349294 kubelet[2724]: E0517 00:17:26.349169 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f1a2f65-522b-4ce6-af2a-4f947a98dc34-kube-api-access-h7h4r podName:5f1a2f65-522b-4ce6-af2a-4f947a98dc34 nodeName:}" failed. No retries permitted until 2025-05-17 00:17:26.849146102 +0000 UTC m=+6.009916797 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-h7h4r" (UniqueName: "kubernetes.io/projected/5f1a2f65-522b-4ce6-af2a-4f947a98dc34-kube-api-access-h7h4r") pod "kube-proxy-wcqtd" (UID: "5f1a2f65-522b-4ce6-af2a-4f947a98dc34") : configmap "kube-root-ca.crt" not found May 17 00:17:26.950185 kubelet[2724]: E0517 00:17:26.950151 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:27.086446 kubelet[2724]: E0517 00:17:27.085807 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:27.090466 containerd[1579]: time="2025-05-17T00:17:27.086834840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wcqtd,Uid:5f1a2f65-522b-4ce6-af2a-4f947a98dc34,Namespace:kube-system,Attempt:0,}" May 17 00:17:27.164870 containerd[1579]: time="2025-05-17T00:17:27.164723148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:27.164870 containerd[1579]: time="2025-05-17T00:17:27.164800242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:27.164870 containerd[1579]: time="2025-05-17T00:17:27.164811763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:27.165047 containerd[1579]: time="2025-05-17T00:17:27.164938269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:27.220167 containerd[1579]: time="2025-05-17T00:17:27.220069605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wcqtd,Uid:5f1a2f65-522b-4ce6-af2a-4f947a98dc34,Namespace:kube-system,Attempt:0,} returns sandbox id \"b08a26810e70b0e3b585f78fd1a4a2d3676e37e686c1403d8a3358b537d8b8c9\"" May 17 00:17:27.221485 kubelet[2724]: E0517 00:17:27.221439 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:27.223414 containerd[1579]: time="2025-05-17T00:17:27.223386582Z" level=info msg="CreateContainer within sandbox \"b08a26810e70b0e3b585f78fd1a4a2d3676e37e686c1403d8a3358b537d8b8c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:17:27.239464 containerd[1579]: time="2025-05-17T00:17:27.239410946Z" level=info msg="CreateContainer within sandbox \"b08a26810e70b0e3b585f78fd1a4a2d3676e37e686c1403d8a3358b537d8b8c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f72af55d71122b179039489b9d05b464eb2b8d62fbfeaabce952e88e94573d2\"" May 17 00:17:27.239935 containerd[1579]: time="2025-05-17T00:17:27.239891119Z" level=info msg="StartContainer for \"0f72af55d71122b179039489b9d05b464eb2b8d62fbfeaabce952e88e94573d2\"" May 17 00:17:27.300878 containerd[1579]: time="2025-05-17T00:17:27.300831466Z" level=info msg="StartContainer for \"0f72af55d71122b179039489b9d05b464eb2b8d62fbfeaabce952e88e94573d2\" returns successfully" May 17 00:17:27.361315 kubelet[2724]: I0517 00:17:27.361236 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhwls\" (UniqueName: \"kubernetes.io/projected/5a60a48e-81c9-4231-a634-8b1ce6aa3457-kube-api-access-mhwls\") pod \"tigera-operator-7c5755cdcb-9cpg9\" (UID: \"5a60a48e-81c9-4231-a634-8b1ce6aa3457\") " 
pod="tigera-operator/tigera-operator-7c5755cdcb-9cpg9" May 17 00:17:27.361450 kubelet[2724]: I0517 00:17:27.361365 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a60a48e-81c9-4231-a634-8b1ce6aa3457-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-9cpg9\" (UID: \"5a60a48e-81c9-4231-a634-8b1ce6aa3457\") " pod="tigera-operator/tigera-operator-7c5755cdcb-9cpg9" May 17 00:17:27.503940 containerd[1579]: time="2025-05-17T00:17:27.503798836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-9cpg9,Uid:5a60a48e-81c9-4231-a634-8b1ce6aa3457,Namespace:tigera-operator,Attempt:0,}" May 17 00:17:27.636175 containerd[1579]: time="2025-05-17T00:17:27.636071823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:27.636175 containerd[1579]: time="2025-05-17T00:17:27.636149006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:27.636343 containerd[1579]: time="2025-05-17T00:17:27.636163753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:27.636516 containerd[1579]: time="2025-05-17T00:17:27.636470814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:27.692579 containerd[1579]: time="2025-05-17T00:17:27.692539925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-9cpg9,Uid:5a60a48e-81c9-4231-a634-8b1ce6aa3457,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a1c4a2ee3283299b53c09dcac5f3fdf9a3f5362d74391d600704e4249602dfac\"" May 17 00:17:27.694001 containerd[1579]: time="2025-05-17T00:17:27.693967779Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:17:27.960530 kubelet[2724]: E0517 00:17:27.960502 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:27.967980 kubelet[2724]: I0517 00:17:27.967903 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wcqtd" podStartSLOduration=1.9678865399999999 podStartE2EDuration="1.96788654s" podCreationTimestamp="2025-05-17 00:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:17:27.967556106 +0000 UTC m=+7.128326801" watchObservedRunningTime="2025-05-17 00:17:27.96788654 +0000 UTC m=+7.128657235" May 17 00:17:28.559911 kubelet[2724]: E0517 00:17:28.559856 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:28.962553 kubelet[2724]: E0517 00:17:28.962511 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:29.479947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824994088.mount: Deactivated successfully. 
May 17 00:17:30.145517 containerd[1579]: time="2025-05-17T00:17:30.145471529Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:30.146212 containerd[1579]: time="2025-05-17T00:17:30.146154349Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:17:30.147345 containerd[1579]: time="2025-05-17T00:17:30.147313549Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:30.152423 containerd[1579]: time="2025-05-17T00:17:30.152373731Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:30.153130 containerd[1579]: time="2025-05-17T00:17:30.153089073Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.45908667s" May 17 00:17:30.153164 containerd[1579]: time="2025-05-17T00:17:30.153127665Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:17:30.154920 containerd[1579]: time="2025-05-17T00:17:30.154896620Z" level=info msg="CreateContainer within sandbox \"a1c4a2ee3283299b53c09dcac5f3fdf9a3f5362d74391d600704e4249602dfac\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:17:30.168729 containerd[1579]: time="2025-05-17T00:17:30.168681439Z" level=info msg="CreateContainer within sandbox 
\"a1c4a2ee3283299b53c09dcac5f3fdf9a3f5362d74391d600704e4249602dfac\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5a22e38e6523b2f2dce6b888d9acb5645ed0d3ea8df65e01ff671675a5c12d5c\"" May 17 00:17:30.169207 containerd[1579]: time="2025-05-17T00:17:30.169175831Z" level=info msg="StartContainer for \"5a22e38e6523b2f2dce6b888d9acb5645ed0d3ea8df65e01ff671675a5c12d5c\"" May 17 00:17:30.221177 containerd[1579]: time="2025-05-17T00:17:30.221135580Z" level=info msg="StartContainer for \"5a22e38e6523b2f2dce6b888d9acb5645ed0d3ea8df65e01ff671675a5c12d5c\" returns successfully" May 17 00:17:30.769124 kubelet[2724]: E0517 00:17:30.769046 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:30.966646 kubelet[2724]: E0517 00:17:30.966595 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:30.981489 kubelet[2724]: I0517 00:17:30.981395 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-9cpg9" podStartSLOduration=1.521049181 podStartE2EDuration="3.981374326s" podCreationTimestamp="2025-05-17 00:17:27 +0000 UTC" firstStartedPulling="2025-05-17 00:17:27.693526969 +0000 UTC m=+6.854297664" lastFinishedPulling="2025-05-17 00:17:30.153852114 +0000 UTC m=+9.314622809" observedRunningTime="2025-05-17 00:17:30.981328571 +0000 UTC m=+10.142099266" watchObservedRunningTime="2025-05-17 00:17:30.981374326 +0000 UTC m=+10.142145022" May 17 00:17:36.011375 sudo[1769]: pam_unix(sudo:session): session closed for user root May 17 00:17:36.017204 sshd[1762]: pam_unix(sshd:session): session closed for user core May 17 00:17:36.024574 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:38268.service: Deactivated successfully. 
May 17 00:17:36.028284 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. May 17 00:17:36.028842 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:17:36.031287 systemd-logind[1564]: Removed session 7. May 17 00:17:36.954549 kubelet[2724]: E0517 00:17:36.954214 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:38.546511 kubelet[2724]: I0517 00:17:38.546444 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9706609a-b0a7-436f-9819-8818d99c9a7e-tigera-ca-bundle\") pod \"calico-typha-7cf466d46b-w57lk\" (UID: \"9706609a-b0a7-436f-9819-8818d99c9a7e\") " pod="calico-system/calico-typha-7cf466d46b-w57lk" May 17 00:17:38.546511 kubelet[2724]: I0517 00:17:38.546489 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9706609a-b0a7-436f-9819-8818d99c9a7e-typha-certs\") pod \"calico-typha-7cf466d46b-w57lk\" (UID: \"9706609a-b0a7-436f-9819-8818d99c9a7e\") " pod="calico-system/calico-typha-7cf466d46b-w57lk" May 17 00:17:38.546511 kubelet[2724]: I0517 00:17:38.546513 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqkfg\" (UniqueName: \"kubernetes.io/projected/9706609a-b0a7-436f-9819-8818d99c9a7e-kube-api-access-sqkfg\") pod \"calico-typha-7cf466d46b-w57lk\" (UID: \"9706609a-b0a7-436f-9819-8818d99c9a7e\") " pod="calico-system/calico-typha-7cf466d46b-w57lk" May 17 00:17:38.690750 kubelet[2724]: E0517 00:17:38.690718 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:38.695350 containerd[1579]: 
time="2025-05-17T00:17:38.695301761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf466d46b-w57lk,Uid:9706609a-b0a7-436f-9819-8818d99c9a7e,Namespace:calico-system,Attempt:0,}" May 17 00:17:38.722320 containerd[1579]: time="2025-05-17T00:17:38.722050148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:38.722320 containerd[1579]: time="2025-05-17T00:17:38.722160814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:38.722320 containerd[1579]: time="2025-05-17T00:17:38.722174109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:38.722320 containerd[1579]: time="2025-05-17T00:17:38.722279746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:38.771277 containerd[1579]: time="2025-05-17T00:17:38.770918897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf466d46b-w57lk,Uid:9706609a-b0a7-436f-9819-8818d99c9a7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"da4ee6a9ff2cd5773642ff26db68735e75e46fd6e39239cad03dd6d3cc29942c\"" May 17 00:17:38.775604 kubelet[2724]: E0517 00:17:38.775473 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:38.781532 containerd[1579]: time="2025-05-17T00:17:38.781496733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:17:38.948542 kubelet[2724]: I0517 00:17:38.948490 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-var-lib-calico\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948542 kubelet[2724]: I0517 00:17:38.948526 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-cni-log-dir\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948713 kubelet[2724]: I0517 00:17:38.948547 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/58f07b6e-2931-4d7e-9910-c32db2e24195-node-certs\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948713 kubelet[2724]: I0517 00:17:38.948604 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-var-run-calico\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948713 kubelet[2724]: I0517 00:17:38.948623 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-lib-modules\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948713 kubelet[2724]: I0517 00:17:38.948638 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-xtables-lock\") pod 
\"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948713 kubelet[2724]: I0517 00:17:38.948658 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88h4p\" (UniqueName: \"kubernetes.io/projected/58f07b6e-2931-4d7e-9910-c32db2e24195-kube-api-access-88h4p\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948833 kubelet[2724]: I0517 00:17:38.948684 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-cni-net-dir\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948833 kubelet[2724]: I0517 00:17:38.948702 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-flexvol-driver-host\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948833 kubelet[2724]: I0517 00:17:38.948722 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f07b6e-2931-4d7e-9910-c32db2e24195-tigera-ca-bundle\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948833 kubelet[2724]: I0517 00:17:38.948742 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-cni-bin-dir\") pod \"calico-node-lnvr4\" (UID: 
\"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:38.948833 kubelet[2724]: I0517 00:17:38.948760 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/58f07b6e-2931-4d7e-9910-c32db2e24195-policysync\") pod \"calico-node-lnvr4\" (UID: \"58f07b6e-2931-4d7e-9910-c32db2e24195\") " pod="calico-system/calico-node-lnvr4" May 17 00:17:39.051221 kubelet[2724]: E0517 00:17:39.051187 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.051221 kubelet[2724]: W0517 00:17:39.051216 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.051411 kubelet[2724]: E0517 00:17:39.051265 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.053032 kubelet[2724]: E0517 00:17:39.052961 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.053032 kubelet[2724]: W0517 00:17:39.052981 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.053032 kubelet[2724]: E0517 00:17:39.053000 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.058573 kubelet[2724]: E0517 00:17:39.058547 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.058573 kubelet[2724]: W0517 00:17:39.058571 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.058675 kubelet[2724]: E0517 00:17:39.058592 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.075705 kubelet[2724]: E0517 00:17:39.075431 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdx7n" podUID="5d2460d1-6b11-4f05-a6fd-bf4b83ac6776" May 17 00:17:39.094140 containerd[1579]: time="2025-05-17T00:17:39.094095475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lnvr4,Uid:58f07b6e-2931-4d7e-9910-c32db2e24195,Namespace:calico-system,Attempt:0,}" May 17 00:17:39.121969 containerd[1579]: time="2025-05-17T00:17:39.121840798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:39.122165 containerd[1579]: time="2025-05-17T00:17:39.121938710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:39.122165 containerd[1579]: time="2025-05-17T00:17:39.121951584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:39.122165 containerd[1579]: time="2025-05-17T00:17:39.122058975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:39.150127 kubelet[2724]: E0517 00:17:39.150086 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.150127 kubelet[2724]: W0517 00:17:39.150110 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.150127 kubelet[2724]: E0517 00:17:39.150127 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.150507 kubelet[2724]: E0517 00:17:39.150485 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.150507 kubelet[2724]: W0517 00:17:39.150498 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.150507 kubelet[2724]: E0517 00:17:39.150507 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.150722 kubelet[2724]: E0517 00:17:39.150712 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.150722 kubelet[2724]: W0517 00:17:39.150719 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.150799 kubelet[2724]: E0517 00:17:39.150729 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.151116 kubelet[2724]: E0517 00:17:39.151090 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.151116 kubelet[2724]: W0517 00:17:39.151108 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.151116 kubelet[2724]: E0517 00:17:39.151118 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.151402 kubelet[2724]: E0517 00:17:39.151380 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.151455 kubelet[2724]: W0517 00:17:39.151421 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.151455 kubelet[2724]: E0517 00:17:39.151432 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.152020 kubelet[2724]: E0517 00:17:39.151681 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.152020 kubelet[2724]: W0517 00:17:39.151713 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.152020 kubelet[2724]: E0517 00:17:39.151744 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.152840 kubelet[2724]: E0517 00:17:39.152617 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.152840 kubelet[2724]: W0517 00:17:39.152646 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.152840 kubelet[2724]: E0517 00:17:39.152674 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.154808 kubelet[2724]: E0517 00:17:39.154033 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.154808 kubelet[2724]: W0517 00:17:39.154048 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.154808 kubelet[2724]: E0517 00:17:39.154059 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.158832 kubelet[2724]: E0517 00:17:39.158821 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.158881 kubelet[2724]: W0517 00:17:39.158872 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.158923 kubelet[2724]: E0517 00:17:39.158915 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.158990 kubelet[2724]: I0517 00:17:39.158979 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d2460d1-6b11-4f05-a6fd-bf4b83ac6776-registration-dir\") pod \"csi-node-driver-cdx7n\" (UID: \"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776\") " pod="calico-system/csi-node-driver-cdx7n" May 17 00:17:39.159325 kubelet[2724]: E0517 00:17:39.159312 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.159500 kubelet[2724]: W0517 00:17:39.159406 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.159500 kubelet[2724]: E0517 00:17:39.159424 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.160274 kubelet[2724]: I0517 00:17:39.160137 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjb48\" (UniqueName: \"kubernetes.io/projected/5d2460d1-6b11-4f05-a6fd-bf4b83ac6776-kube-api-access-gjb48\") pod \"csi-node-driver-cdx7n\" (UID: \"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776\") " pod="calico-system/csi-node-driver-cdx7n" May 17 00:17:39.160463 kubelet[2724]: E0517 00:17:39.160394 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.160463 kubelet[2724]: W0517 00:17:39.160405 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.160463 kubelet[2724]: E0517 00:17:39.160421 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.160463 kubelet[2724]: I0517 00:17:39.160435 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d2460d1-6b11-4f05-a6fd-bf4b83ac6776-kubelet-dir\") pod \"csi-node-driver-cdx7n\" (UID: \"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776\") " pod="calico-system/csi-node-driver-cdx7n" May 17 00:17:39.161032 kubelet[2724]: E0517 00:17:39.161002 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.161032 kubelet[2724]: W0517 00:17:39.161023 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.161087 kubelet[2724]: E0517 00:17:39.161043 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.161619 kubelet[2724]: E0517 00:17:39.161595 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.161619 kubelet[2724]: W0517 00:17:39.161610 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.161690 kubelet[2724]: E0517 00:17:39.161626 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.161713 containerd[1579]: time="2025-05-17T00:17:39.161645939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lnvr4,Uid:58f07b6e-2931-4d7e-9910-c32db2e24195,Namespace:calico-system,Attempt:0,} returns sandbox id \"d32255ed28515af878163517a78d25892d1844924a8f83d63f67ea084871be01\"" May 17 00:17:39.161883 kubelet[2724]: E0517 00:17:39.161863 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.161883 kubelet[2724]: W0517 00:17:39.161878 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.161943 kubelet[2724]: E0517 00:17:39.161888 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.161943 kubelet[2724]: I0517 00:17:39.161914 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5d2460d1-6b11-4f05-a6fd-bf4b83ac6776-varrun\") pod \"csi-node-driver-cdx7n\" (UID: \"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776\") " pod="calico-system/csi-node-driver-cdx7n" May 17 00:17:39.162227 kubelet[2724]: E0517 00:17:39.162144 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.162227 kubelet[2724]: W0517 00:17:39.162156 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.162227 kubelet[2724]: E0517 00:17:39.162190 2724 plugins.go:691] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.162326 kubelet[2724]: I0517 00:17:39.162229 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d2460d1-6b11-4f05-a6fd-bf4b83ac6776-socket-dir\") pod \"csi-node-driver-cdx7n\" (UID: \"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776\") " pod="calico-system/csi-node-driver-cdx7n" May 17 00:17:39.162541 kubelet[2724]: E0517 00:17:39.162423 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.162541 kubelet[2724]: W0517 00:17:39.162436 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.162541 kubelet[2724]: E0517 00:17:39.162479 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:39.162666 kubelet[2724]: E0517 00:17:39.162646 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.162666 kubelet[2724]: W0517 00:17:39.162658 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.162744 kubelet[2724]: E0517 00:17:39.162673 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:39.274484 kubelet[2724]: E0517 00:17:39.274465 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:39.274484 kubelet[2724]: W0517 00:17:39.274480 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:39.274556 kubelet[2724]: E0517 00:17:39.274493 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:40.216686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000397404.mount: Deactivated successfully. May 17 00:17:40.552612 containerd[1579]: time="2025-05-17T00:17:40.552481355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:40.553505 containerd[1579]: time="2025-05-17T00:17:40.553449484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:17:40.554715 containerd[1579]: time="2025-05-17T00:17:40.554675145Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:40.556745 containerd[1579]: time="2025-05-17T00:17:40.556701211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:40.557327 containerd[1579]: time="2025-05-17T00:17:40.557274772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 1.775738215s" May 17 00:17:40.557327 containerd[1579]: time="2025-05-17T00:17:40.557314236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:17:40.558833 containerd[1579]: time="2025-05-17T00:17:40.558761160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:17:40.578672 containerd[1579]: time="2025-05-17T00:17:40.578629085Z" level=info msg="CreateContainer within sandbox \"da4ee6a9ff2cd5773642ff26db68735e75e46fd6e39239cad03dd6d3cc29942c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:17:40.591913 containerd[1579]: time="2025-05-17T00:17:40.591869747Z" level=info msg="CreateContainer within sandbox \"da4ee6a9ff2cd5773642ff26db68735e75e46fd6e39239cad03dd6d3cc29942c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d7de77b793c7c6796b943fb9ca30edef96ed1d964b8646501a77465c2a62bfbb\"" May 17 00:17:40.595186 containerd[1579]: time="2025-05-17T00:17:40.595146490Z" level=info msg="StartContainer for \"d7de77b793c7c6796b943fb9ca30edef96ed1d964b8646501a77465c2a62bfbb\"" May 17 00:17:40.671450 containerd[1579]: time="2025-05-17T00:17:40.671401503Z" level=info msg="StartContainer for \"d7de77b793c7c6796b943fb9ca30edef96ed1d964b8646501a77465c2a62bfbb\" returns successfully" May 17 00:17:40.932238 kubelet[2724]: E0517 00:17:40.932181 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-cdx7n" podUID="5d2460d1-6b11-4f05-a6fd-bf4b83ac6776" May 17 00:17:40.994096 kubelet[2724]: E0517 00:17:40.994045 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:41.070682 kubelet[2724]: E0517 00:17:41.070619 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.070682 kubelet[2724]: W0517 00:17:41.070657 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.070682 kubelet[2724]: E0517 00:17:41.070689 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.071060 kubelet[2724]: E0517 00:17:41.071037 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.071060 kubelet[2724]: W0517 00:17:41.071051 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.071060 kubelet[2724]: E0517 00:17:41.071060 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.071276 kubelet[2724]: E0517 00:17:41.071242 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.071276 kubelet[2724]: W0517 00:17:41.071274 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.071357 kubelet[2724]: E0517 00:17:41.071282 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.071516 kubelet[2724]: E0517 00:17:41.071498 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.071546 kubelet[2724]: W0517 00:17:41.071515 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.071546 kubelet[2724]: E0517 00:17:41.071528 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.071887 kubelet[2724]: E0517 00:17:41.071863 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.071887 kubelet[2724]: W0517 00:17:41.071877 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.071887 kubelet[2724]: E0517 00:17:41.071885 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.072085 kubelet[2724]: E0517 00:17:41.072066 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.072085 kubelet[2724]: W0517 00:17:41.072077 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.072085 kubelet[2724]: E0517 00:17:41.072084 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.072281 kubelet[2724]: E0517 00:17:41.072269 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.072281 kubelet[2724]: W0517 00:17:41.072279 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.072341 kubelet[2724]: E0517 00:17:41.072295 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.072473 kubelet[2724]: E0517 00:17:41.072460 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.072473 kubelet[2724]: W0517 00:17:41.072471 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.072512 kubelet[2724]: E0517 00:17:41.072478 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.072656 kubelet[2724]: E0517 00:17:41.072644 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.072656 kubelet[2724]: W0517 00:17:41.072654 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.072710 kubelet[2724]: E0517 00:17:41.072661 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.072842 kubelet[2724]: E0517 00:17:41.072829 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.072842 kubelet[2724]: W0517 00:17:41.072839 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.072902 kubelet[2724]: E0517 00:17:41.072848 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.073064 kubelet[2724]: E0517 00:17:41.073048 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.073064 kubelet[2724]: W0517 00:17:41.073061 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.073131 kubelet[2724]: E0517 00:17:41.073074 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.073284 kubelet[2724]: E0517 00:17:41.073271 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.073284 kubelet[2724]: W0517 00:17:41.073282 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.073340 kubelet[2724]: E0517 00:17:41.073298 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.073525 kubelet[2724]: E0517 00:17:41.073512 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.073525 kubelet[2724]: W0517 00:17:41.073522 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.073579 kubelet[2724]: E0517 00:17:41.073531 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.073748 kubelet[2724]: E0517 00:17:41.073734 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.073748 kubelet[2724]: W0517 00:17:41.073744 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.073801 kubelet[2724]: E0517 00:17:41.073754 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.073951 kubelet[2724]: E0517 00:17:41.073938 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.073951 kubelet[2724]: W0517 00:17:41.073949 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.074003 kubelet[2724]: E0517 00:17:41.073958 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.081511 kubelet[2724]: E0517 00:17:41.081479 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.081511 kubelet[2724]: W0517 00:17:41.081506 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.081584 kubelet[2724]: E0517 00:17:41.081528 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.081843 kubelet[2724]: E0517 00:17:41.081822 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.081843 kubelet[2724]: W0517 00:17:41.081835 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.081907 kubelet[2724]: E0517 00:17:41.081850 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.082080 kubelet[2724]: E0517 00:17:41.082065 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.082080 kubelet[2724]: W0517 00:17:41.082079 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.082126 kubelet[2724]: E0517 00:17:41.082093 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.082317 kubelet[2724]: E0517 00:17:41.082303 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.082317 kubelet[2724]: W0517 00:17:41.082313 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.082377 kubelet[2724]: E0517 00:17:41.082328 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.082516 kubelet[2724]: E0517 00:17:41.082504 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.082516 kubelet[2724]: W0517 00:17:41.082514 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.082557 kubelet[2724]: E0517 00:17:41.082525 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.082759 kubelet[2724]: E0517 00:17:41.082734 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.082759 kubelet[2724]: W0517 00:17:41.082753 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.082813 kubelet[2724]: E0517 00:17:41.082770 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.083019 kubelet[2724]: E0517 00:17:41.083004 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.083046 kubelet[2724]: W0517 00:17:41.083018 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.083046 kubelet[2724]: E0517 00:17:41.083037 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.083277 kubelet[2724]: E0517 00:17:41.083244 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.083323 kubelet[2724]: W0517 00:17:41.083276 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.083350 kubelet[2724]: E0517 00:17:41.083327 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.083508 kubelet[2724]: E0517 00:17:41.083492 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.083533 kubelet[2724]: W0517 00:17:41.083506 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.083554 kubelet[2724]: E0517 00:17:41.083533 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.083743 kubelet[2724]: E0517 00:17:41.083728 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.083767 kubelet[2724]: W0517 00:17:41.083741 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.083767 kubelet[2724]: E0517 00:17:41.083757 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.084083 kubelet[2724]: E0517 00:17:41.084063 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.084122 kubelet[2724]: W0517 00:17:41.084084 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.084122 kubelet[2724]: E0517 00:17:41.084103 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.084396 kubelet[2724]: E0517 00:17:41.084380 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.084396 kubelet[2724]: W0517 00:17:41.084390 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.084475 kubelet[2724]: E0517 00:17:41.084405 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.084611 kubelet[2724]: E0517 00:17:41.084595 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.084611 kubelet[2724]: W0517 00:17:41.084607 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.084668 kubelet[2724]: E0517 00:17:41.084621 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.084808 kubelet[2724]: E0517 00:17:41.084793 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.084808 kubelet[2724]: W0517 00:17:41.084803 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.084877 kubelet[2724]: E0517 00:17:41.084813 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.085039 kubelet[2724]: E0517 00:17:41.085021 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.085039 kubelet[2724]: W0517 00:17:41.085036 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.085109 kubelet[2724]: E0517 00:17:41.085053 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.085360 kubelet[2724]: E0517 00:17:41.085342 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.085360 kubelet[2724]: W0517 00:17:41.085356 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.085436 kubelet[2724]: E0517 00:17:41.085372 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.085630 kubelet[2724]: E0517 00:17:41.085613 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.085630 kubelet[2724]: W0517 00:17:41.085627 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.085690 kubelet[2724]: E0517 00:17:41.085638 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:17:41.086158 kubelet[2724]: E0517 00:17:41.086128 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:17:41.086158 kubelet[2724]: W0517 00:17:41.086143 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:17:41.086158 kubelet[2724]: E0517 00:17:41.086154 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:17:41.954629 containerd[1579]: time="2025-05-17T00:17:41.954580497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:41.955435 containerd[1579]: time="2025-05-17T00:17:41.955399208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:17:41.956741 containerd[1579]: time="2025-05-17T00:17:41.956699488Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:41.958869 containerd[1579]: time="2025-05-17T00:17:41.958817767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:41.960706 containerd[1579]: time="2025-05-17T00:17:41.960378053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.401582119s" May 17 00:17:41.960706 containerd[1579]: time="2025-05-17T00:17:41.960418238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:17:41.964095 containerd[1579]: time="2025-05-17T00:17:41.964051059Z" level=info msg="CreateContainer within sandbox \"d32255ed28515af878163517a78d25892d1844924a8f83d63f67ea084871be01\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:17:41.981053 containerd[1579]: time="2025-05-17T00:17:41.981006990Z" level=info msg="CreateContainer within sandbox \"d32255ed28515af878163517a78d25892d1844924a8f83d63f67ea084871be01\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1e7d8c96f1a9ade00900700926fb04480b68fb1726f01b7cab885be013e3f164\"" May 17 00:17:41.981524 containerd[1579]: time="2025-05-17T00:17:41.981486867Z" level=info msg="StartContainer for \"1e7d8c96f1a9ade00900700926fb04480b68fb1726f01b7cab885be013e3f164\"" May 17 00:17:41.998594 kubelet[2724]: I0517 00:17:41.998556 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:17:41.999153 kubelet[2724]: E0517 00:17:41.999133 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:42.043907 containerd[1579]: time="2025-05-17T00:17:42.043847412Z" level=info msg="StartContainer for \"1e7d8c96f1a9ade00900700926fb04480b68fb1726f01b7cab885be013e3f164\" returns successfully" May 17 00:17:42.081016 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-1e7d8c96f1a9ade00900700926fb04480b68fb1726f01b7cab885be013e3f164-rootfs.mount: Deactivated successfully. May 17 00:17:42.473327 containerd[1579]: time="2025-05-17T00:17:42.471655883Z" level=info msg="shim disconnected" id=1e7d8c96f1a9ade00900700926fb04480b68fb1726f01b7cab885be013e3f164 namespace=k8s.io May 17 00:17:42.473327 containerd[1579]: time="2025-05-17T00:17:42.473317430Z" level=warning msg="cleaning up after shim disconnected" id=1e7d8c96f1a9ade00900700926fb04480b68fb1726f01b7cab885be013e3f164 namespace=k8s.io May 17 00:17:42.473327 containerd[1579]: time="2025-05-17T00:17:42.473326978Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:17:42.932667 kubelet[2724]: E0517 00:17:42.932611 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdx7n" podUID="5d2460d1-6b11-4f05-a6fd-bf4b83ac6776" May 17 00:17:43.003796 containerd[1579]: time="2025-05-17T00:17:43.003725410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:17:43.028274 kubelet[2724]: I0517 00:17:43.028036 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf466d46b-w57lk" podStartSLOduration=3.248903413 podStartE2EDuration="5.028018405s" podCreationTimestamp="2025-05-17 00:17:38 +0000 UTC" firstStartedPulling="2025-05-17 00:17:38.778874165 +0000 UTC m=+17.939644850" lastFinishedPulling="2025-05-17 00:17:40.557989147 +0000 UTC m=+19.718759842" observedRunningTime="2025-05-17 00:17:41.004753846 +0000 UTC m=+20.165524541" watchObservedRunningTime="2025-05-17 00:17:43.028018405 +0000 UTC m=+22.188789100" May 17 00:17:44.931361 kubelet[2724]: E0517 00:17:44.931306 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdx7n" podUID="5d2460d1-6b11-4f05-a6fd-bf4b83ac6776" May 17 00:17:45.705571 containerd[1579]: time="2025-05-17T00:17:45.705518436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:45.706315 containerd[1579]: time="2025-05-17T00:17:45.706245726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:17:45.707349 containerd[1579]: time="2025-05-17T00:17:45.707314256Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:45.709509 containerd[1579]: time="2025-05-17T00:17:45.709480187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:45.710124 containerd[1579]: time="2025-05-17T00:17:45.710098975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 2.706312822s" May 17 00:17:45.710180 containerd[1579]: time="2025-05-17T00:17:45.710128641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:17:45.712183 containerd[1579]: time="2025-05-17T00:17:45.712124032Z" level=info msg="CreateContainer within sandbox 
\"d32255ed28515af878163517a78d25892d1844924a8f83d63f67ea084871be01\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:17:45.727802 containerd[1579]: time="2025-05-17T00:17:45.727751726Z" level=info msg="CreateContainer within sandbox \"d32255ed28515af878163517a78d25892d1844924a8f83d63f67ea084871be01\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c386971ac4352d5cfd13787591de7b587e777f3bc0c0c2d64bd2e28b12b49a8c\"" May 17 00:17:45.732503 containerd[1579]: time="2025-05-17T00:17:45.732455575Z" level=info msg="StartContainer for \"c386971ac4352d5cfd13787591de7b587e777f3bc0c0c2d64bd2e28b12b49a8c\"" May 17 00:17:45.791440 containerd[1579]: time="2025-05-17T00:17:45.791404584Z" level=info msg="StartContainer for \"c386971ac4352d5cfd13787591de7b587e777f3bc0c0c2d64bd2e28b12b49a8c\" returns successfully" May 17 00:17:46.931953 kubelet[2724]: E0517 00:17:46.931900 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdx7n" podUID="5d2460d1-6b11-4f05-a6fd-bf4b83ac6776" May 17 00:17:47.118401 systemd-resolved[1456]: Under memory pressure, flushing caches. May 17 00:17:47.118466 systemd-resolved[1456]: Flushed all caches. May 17 00:17:47.120284 systemd-journald[1156]: Under memory pressure, flushing caches. May 17 00:17:47.332002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c386971ac4352d5cfd13787591de7b587e777f3bc0c0c2d64bd2e28b12b49a8c-rootfs.mount: Deactivated successfully. 
May 17 00:17:47.348959 containerd[1579]: time="2025-05-17T00:17:47.348903480Z" level=info msg="shim disconnected" id=c386971ac4352d5cfd13787591de7b587e777f3bc0c0c2d64bd2e28b12b49a8c namespace=k8s.io May 17 00:17:47.349403 containerd[1579]: time="2025-05-17T00:17:47.348962330Z" level=warning msg="cleaning up after shim disconnected" id=c386971ac4352d5cfd13787591de7b587e777f3bc0c0c2d64bd2e28b12b49a8c namespace=k8s.io May 17 00:17:47.349403 containerd[1579]: time="2025-05-17T00:17:47.348974613Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:17:47.368715 kubelet[2724]: I0517 00:17:47.368679 2724 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:17:47.433813 kubelet[2724]: I0517 00:17:47.433778 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/deceb09e-4340-4f28-8a23-a33b54df6910-calico-apiserver-certs\") pod \"calico-apiserver-6fcc4f48fc-xtls5\" (UID: \"deceb09e-4340-4f28-8a23-a33b54df6910\") " pod="calico-apiserver/calico-apiserver-6fcc4f48fc-xtls5" May 17 00:17:47.433813 kubelet[2724]: I0517 00:17:47.433817 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-backend-key-pair\") pod \"whisker-b46bdf5fd-tkbfp\" (UID: \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\") " pod="calico-system/whisker-b46bdf5fd-tkbfp" May 17 00:17:47.434087 kubelet[2724]: I0517 00:17:47.433836 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv79b\" (UniqueName: \"kubernetes.io/projected/d70a794c-b705-4096-ab09-a29d9b66f140-kube-api-access-tv79b\") pod \"coredns-7c65d6cfc9-mm6b4\" (UID: \"d70a794c-b705-4096-ab09-a29d9b66f140\") " pod="kube-system/coredns-7c65d6cfc9-mm6b4" May 17 00:17:47.434087 
kubelet[2724]: I0517 00:17:47.433854 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab4da613-d8f1-4a47-86db-18da03ede1ec-calico-apiserver-certs\") pod \"calico-apiserver-6fcc4f48fc-87trf\" (UID: \"ab4da613-d8f1-4a47-86db-18da03ede1ec\") " pod="calico-apiserver/calico-apiserver-6fcc4f48fc-87trf" May 17 00:17:47.434087 kubelet[2724]: I0517 00:17:47.433874 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64-config\") pod \"goldmane-8f77d7b6c-82l2f\" (UID: \"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64\") " pod="calico-system/goldmane-8f77d7b6c-82l2f" May 17 00:17:47.434087 kubelet[2724]: I0517 00:17:47.433892 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ldv\" (UniqueName: \"kubernetes.io/projected/deceb09e-4340-4f28-8a23-a33b54df6910-kube-api-access-l2ldv\") pod \"calico-apiserver-6fcc4f48fc-xtls5\" (UID: \"deceb09e-4340-4f28-8a23-a33b54df6910\") " pod="calico-apiserver/calico-apiserver-6fcc4f48fc-xtls5" May 17 00:17:47.434087 kubelet[2724]: I0517 00:17:47.433908 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgxqs\" (UniqueName: \"kubernetes.io/projected/ab4da613-d8f1-4a47-86db-18da03ede1ec-kube-api-access-pgxqs\") pod \"calico-apiserver-6fcc4f48fc-87trf\" (UID: \"ab4da613-d8f1-4a47-86db-18da03ede1ec\") " pod="calico-apiserver/calico-apiserver-6fcc4f48fc-87trf" May 17 00:17:47.434312 kubelet[2724]: I0517 00:17:47.433921 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqdb\" (UniqueName: \"kubernetes.io/projected/5188de2f-1d4a-4fed-8a5e-e1444595d2e7-kube-api-access-hzqdb\") pod \"coredns-7c65d6cfc9-vgszq\" (UID: 
\"5188de2f-1d4a-4fed-8a5e-e1444595d2e7\") " pod="kube-system/coredns-7c65d6cfc9-vgszq" May 17 00:17:47.434312 kubelet[2724]: I0517 00:17:47.433937 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cdc\" (UniqueName: \"kubernetes.io/projected/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-kube-api-access-x7cdc\") pod \"whisker-b46bdf5fd-tkbfp\" (UID: \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\") " pod="calico-system/whisker-b46bdf5fd-tkbfp" May 17 00:17:47.434312 kubelet[2724]: I0517 00:17:47.433952 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n49l\" (UniqueName: \"kubernetes.io/projected/1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64-kube-api-access-7n49l\") pod \"goldmane-8f77d7b6c-82l2f\" (UID: \"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64\") " pod="calico-system/goldmane-8f77d7b6c-82l2f" May 17 00:17:47.434312 kubelet[2724]: I0517 00:17:47.433968 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2d9321-e897-4d8e-ae8f-ddb6087819df-tigera-ca-bundle\") pod \"calico-kube-controllers-57bc89478d-f479x\" (UID: \"3d2d9321-e897-4d8e-ae8f-ddb6087819df\") " pod="calico-system/calico-kube-controllers-57bc89478d-f479x" May 17 00:17:47.434312 kubelet[2724]: I0517 00:17:47.433991 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25f84\" (UniqueName: \"kubernetes.io/projected/3d2d9321-e897-4d8e-ae8f-ddb6087819df-kube-api-access-25f84\") pod \"calico-kube-controllers-57bc89478d-f479x\" (UID: \"3d2d9321-e897-4d8e-ae8f-ddb6087819df\") " pod="calico-system/calico-kube-controllers-57bc89478d-f479x" May 17 00:17:47.434478 kubelet[2724]: I0517 00:17:47.434008 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5188de2f-1d4a-4fed-8a5e-e1444595d2e7-config-volume\") pod \"coredns-7c65d6cfc9-vgszq\" (UID: \"5188de2f-1d4a-4fed-8a5e-e1444595d2e7\") " pod="kube-system/coredns-7c65d6cfc9-vgszq" May 17 00:17:47.434478 kubelet[2724]: I0517 00:17:47.434022 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-ca-bundle\") pod \"whisker-b46bdf5fd-tkbfp\" (UID: \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\") " pod="calico-system/whisker-b46bdf5fd-tkbfp" May 17 00:17:47.434478 kubelet[2724]: I0517 00:17:47.434036 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-82l2f\" (UID: \"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64\") " pod="calico-system/goldmane-8f77d7b6c-82l2f" May 17 00:17:47.434478 kubelet[2724]: I0517 00:17:47.434050 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d70a794c-b705-4096-ab09-a29d9b66f140-config-volume\") pod \"coredns-7c65d6cfc9-mm6b4\" (UID: \"d70a794c-b705-4096-ab09-a29d9b66f140\") " pod="kube-system/coredns-7c65d6cfc9-mm6b4" May 17 00:17:47.434478 kubelet[2724]: I0517 00:17:47.434066 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-82l2f\" (UID: \"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64\") " pod="calico-system/goldmane-8f77d7b6c-82l2f" May 17 00:17:47.664715 containerd[1579]: time="2025-05-17T00:17:47.664316812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:17:47.707414 
kubelet[2724]: E0517 00:17:47.707377 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:47.707876 containerd[1579]: time="2025-05-17T00:17:47.707844131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm6b4,Uid:d70a794c-b705-4096-ab09-a29d9b66f140,Namespace:kube-system,Attempt:0,}" May 17 00:17:47.712881 containerd[1579]: time="2025-05-17T00:17:47.712828577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-87trf,Uid:ab4da613-d8f1-4a47-86db-18da03ede1ec,Namespace:calico-apiserver,Attempt:0,}" May 17 00:17:47.713743 containerd[1579]: time="2025-05-17T00:17:47.713704657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b46bdf5fd-tkbfp,Uid:ef3e08eb-8caa-48fa-8f23-213ffd83f8a3,Namespace:calico-system,Attempt:0,}" May 17 00:17:47.716243 kubelet[2724]: E0517 00:17:47.716060 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:47.716403 containerd[1579]: time="2025-05-17T00:17:47.716292319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-82l2f,Uid:1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64,Namespace:calico-system,Attempt:0,}" May 17 00:17:47.716501 containerd[1579]: time="2025-05-17T00:17:47.716477075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vgszq,Uid:5188de2f-1d4a-4fed-8a5e-e1444595d2e7,Namespace:kube-system,Attempt:0,}" May 17 00:17:47.720007 containerd[1579]: time="2025-05-17T00:17:47.719970612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-xtls5,Uid:deceb09e-4340-4f28-8a23-a33b54df6910,Namespace:calico-apiserver,Attempt:0,}" May 17 00:17:47.721436 containerd[1579]: time="2025-05-17T00:17:47.721402361Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57bc89478d-f479x,Uid:3d2d9321-e897-4d8e-ae8f-ddb6087819df,Namespace:calico-system,Attempt:0,}" May 17 00:17:47.884522 containerd[1579]: time="2025-05-17T00:17:47.884468311Z" level=error msg="Failed to destroy network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.886954 containerd[1579]: time="2025-05-17T00:17:47.886904249Z" level=error msg="Failed to destroy network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.888799 containerd[1579]: time="2025-05-17T00:17:47.888706131Z" level=error msg="Failed to destroy network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.889030 containerd[1579]: time="2025-05-17T00:17:47.888997044Z" level=error msg="encountered an error cleaning up failed sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.889162 containerd[1579]: time="2025-05-17T00:17:47.889139301Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-vgszq,Uid:5188de2f-1d4a-4fed-8a5e-e1444595d2e7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.889704 containerd[1579]: time="2025-05-17T00:17:47.889442498Z" level=error msg="encountered an error cleaning up failed sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.889704 containerd[1579]: time="2025-05-17T00:17:47.889478295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b46bdf5fd-tkbfp,Uid:ef3e08eb-8caa-48fa-8f23-213ffd83f8a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.890059 containerd[1579]: time="2025-05-17T00:17:47.889056416Z" level=error msg="encountered an error cleaning up failed sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.890059 containerd[1579]: time="2025-05-17T00:17:47.889988650Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-57bc89478d-f479x,Uid:3d2d9321-e897-4d8e-ae8f-ddb6087819df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.890517 containerd[1579]: time="2025-05-17T00:17:47.890479449Z" level=error msg="Failed to destroy network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.890929 containerd[1579]: time="2025-05-17T00:17:47.890905035Z" level=error msg="encountered an error cleaning up failed sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.891024 containerd[1579]: time="2025-05-17T00:17:47.891000553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-87trf,Uid:ab4da613-d8f1-4a47-86db-18da03ede1ec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.892709 containerd[1579]: time="2025-05-17T00:17:47.892664588Z" level=error msg="Failed to destroy network for sandbox 
\"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.892990 containerd[1579]: time="2025-05-17T00:17:47.892966201Z" level=error msg="encountered an error cleaning up failed sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.893021 containerd[1579]: time="2025-05-17T00:17:47.892999784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-82l2f,Uid:1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.893841 containerd[1579]: time="2025-05-17T00:17:47.893812015Z" level=error msg="Failed to destroy network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.894228 containerd[1579]: time="2025-05-17T00:17:47.894204820Z" level=error msg="encountered an error cleaning up failed sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.894358 containerd[1579]: time="2025-05-17T00:17:47.894309896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm6b4,Uid:d70a794c-b705-4096-ab09-a29d9b66f140,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.899164 kubelet[2724]: E0517 00:17:47.899107 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.899164 kubelet[2724]: E0517 00:17:47.899147 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.899298 kubelet[2724]: E0517 00:17:47.899168 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.899298 kubelet[2724]: E0517 
00:17:47.899192 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vgszq" May 17 00:17:47.899298 kubelet[2724]: E0517 00:17:47.899197 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b46bdf5fd-tkbfp" May 17 00:17:47.899298 kubelet[2724]: E0517 00:17:47.899115 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.899437 kubelet[2724]: E0517 00:17:47.899213 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vgszq" May 17 00:17:47.899437 kubelet[2724]: E0517 00:17:47.899220 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b46bdf5fd-tkbfp" May 17 00:17:47.899437 kubelet[2724]: E0517 00:17:47.899201 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-82l2f" May 17 00:17:47.899437 kubelet[2724]: E0517 00:17:47.899266 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-82l2f" May 17 00:17:47.899534 kubelet[2724]: E0517 00:17:47.899273 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vgszq_kube-system(5188de2f-1d4a-4fed-8a5e-e1444595d2e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vgszq_kube-system(5188de2f-1d4a-4fed-8a5e-e1444595d2e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vgszq" podUID="5188de2f-1d4a-4fed-8a5e-e1444595d2e7" May 17 00:17:47.899534 kubelet[2724]: E0517 00:17:47.899283 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b46bdf5fd-tkbfp_calico-system(ef3e08eb-8caa-48fa-8f23-213ffd83f8a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b46bdf5fd-tkbfp_calico-system(ef3e08eb-8caa-48fa-8f23-213ffd83f8a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b46bdf5fd-tkbfp" podUID="ef3e08eb-8caa-48fa-8f23-213ffd83f8a3" May 17 00:17:47.899534 kubelet[2724]: E0517 00:17:47.899221 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mm6b4" May 17 00:17:47.899648 kubelet[2724]: E0517 00:17:47.899325 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mm6b4" May 17 00:17:47.899648 kubelet[2724]: E0517 
00:17:47.899151 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.899648 kubelet[2724]: E0517 00:17:47.899349 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mm6b4_kube-system(d70a794c-b705-4096-ab09-a29d9b66f140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mm6b4_kube-system(d70a794c-b705-4096-ab09-a29d9b66f140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mm6b4" podUID="d70a794c-b705-4096-ab09-a29d9b66f140" May 17 00:17:47.899741 kubelet[2724]: E0517 00:17:47.899304 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-82l2f_calico-system(1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-82l2f_calico-system(1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64" 
May 17 00:17:47.899741 kubelet[2724]: E0517 00:17:47.899364 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-87trf" May 17 00:17:47.899741 kubelet[2724]: E0517 00:17:47.899378 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-87trf" May 17 00:17:47.899832 kubelet[2724]: E0517 00:17:47.899402 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcc4f48fc-87trf_calico-apiserver(ab4da613-d8f1-4a47-86db-18da03ede1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcc4f48fc-87trf_calico-apiserver(ab4da613-d8f1-4a47-86db-18da03ede1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-87trf" podUID="ab4da613-d8f1-4a47-86db-18da03ede1ec" May 17 00:17:47.899832 kubelet[2724]: E0517 00:17:47.899107 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.899832 kubelet[2724]: E0517 00:17:47.899430 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57bc89478d-f479x" May 17 00:17:47.899915 kubelet[2724]: E0517 00:17:47.899442 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57bc89478d-f479x" May 17 00:17:47.899915 kubelet[2724]: E0517 00:17:47.899468 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57bc89478d-f479x_calico-system(3d2d9321-e897-4d8e-ae8f-ddb6087819df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57bc89478d-f479x_calico-system(3d2d9321-e897-4d8e-ae8f-ddb6087819df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57bc89478d-f479x" podUID="3d2d9321-e897-4d8e-ae8f-ddb6087819df" May 17 00:17:47.902563 containerd[1579]: time="2025-05-17T00:17:47.902528495Z" level=error msg="Failed to destroy network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.902902 containerd[1579]: time="2025-05-17T00:17:47.902877667Z" level=error msg="encountered an error cleaning up failed sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.902945 containerd[1579]: time="2025-05-17T00:17:47.902923764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-xtls5,Uid:deceb09e-4340-4f28-8a23-a33b54df6910,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:47.903202 kubelet[2724]: E0517 00:17:47.903152 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 17 00:17:47.903339 kubelet[2724]: E0517 00:17:47.903220 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-xtls5" May 17 00:17:47.903339 kubelet[2724]: E0517 00:17:47.903265 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-xtls5" May 17 00:17:47.903339 kubelet[2724]: E0517 00:17:47.903310 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcc4f48fc-xtls5_calico-apiserver(deceb09e-4340-4f28-8a23-a33b54df6910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcc4f48fc-xtls5_calico-apiserver(deceb09e-4340-4f28-8a23-a33b54df6910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-xtls5" podUID="deceb09e-4340-4f28-8a23-a33b54df6910" May 17 00:17:48.659643 kubelet[2724]: I0517 00:17:48.659601 2724 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:17:48.660894 kubelet[2724]: I0517 00:17:48.660534 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:17:48.661671 kubelet[2724]: I0517 00:17:48.661637 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:17:48.662632 kubelet[2724]: I0517 00:17:48.662614 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:17:48.693214 containerd[1579]: time="2025-05-17T00:17:48.693160482Z" level=info msg="StopPodSandbox for \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\"" May 17 00:17:48.693871 containerd[1579]: time="2025-05-17T00:17:48.693450845Z" level=info msg="StopPodSandbox for \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\"" May 17 00:17:48.693871 containerd[1579]: time="2025-05-17T00:17:48.693675806Z" level=info msg="StopPodSandbox for \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\"" May 17 00:17:48.693978 containerd[1579]: time="2025-05-17T00:17:48.693936113Z" level=info msg="StopPodSandbox for \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\"" May 17 00:17:48.694921 containerd[1579]: time="2025-05-17T00:17:48.694880922Z" level=info msg="Ensure that sandbox 3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936 in task-service has been cleanup successfully" May 17 00:17:48.694990 containerd[1579]: time="2025-05-17T00:17:48.694886752Z" level=info msg="Ensure that sandbox 5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2 in task-service has been cleanup successfully" May 17 00:17:48.695303 containerd[1579]: 
time="2025-05-17T00:17:48.694892864Z" level=info msg="Ensure that sandbox d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364 in task-service has been cleanup successfully" May 17 00:17:48.695348 kubelet[2724]: I0517 00:17:48.695184 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:17:48.696327 containerd[1579]: time="2025-05-17T00:17:48.694894227Z" level=info msg="Ensure that sandbox d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5 in task-service has been cleanup successfully" May 17 00:17:48.708461 containerd[1579]: time="2025-05-17T00:17:48.708418858Z" level=info msg="StopPodSandbox for \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\"" May 17 00:17:48.709595 containerd[1579]: time="2025-05-17T00:17:48.709283897Z" level=info msg="Ensure that sandbox 4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64 in task-service has been cleanup successfully" May 17 00:17:48.710633 kubelet[2724]: I0517 00:17:48.710605 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:17:48.711223 containerd[1579]: time="2025-05-17T00:17:48.711196987Z" level=info msg="StopPodSandbox for \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\"" May 17 00:17:48.711546 containerd[1579]: time="2025-05-17T00:17:48.711530031Z" level=info msg="Ensure that sandbox 3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76 in task-service has been cleanup successfully" May 17 00:17:48.714206 kubelet[2724]: I0517 00:17:48.714156 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:17:48.715886 containerd[1579]: time="2025-05-17T00:17:48.715825560Z" level=info msg="StopPodSandbox for 
\"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\"" May 17 00:17:48.716759 containerd[1579]: time="2025-05-17T00:17:48.716567658Z" level=info msg="Ensure that sandbox 9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0 in task-service has been cleanup successfully" May 17 00:17:48.744778 containerd[1579]: time="2025-05-17T00:17:48.744717029Z" level=error msg="StopPodSandbox for \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\" failed" error="failed to destroy network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:48.746135 kubelet[2724]: E0517 00:17:48.745991 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:17:48.746135 kubelet[2724]: E0517 00:17:48.746093 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5"} May 17 00:17:48.746238 kubelet[2724]: E0517 00:17:48.746142 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d2d9321-e897-4d8e-ae8f-ddb6087819df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:17:48.746238 kubelet[2724]: E0517 00:17:48.746164 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d2d9321-e897-4d8e-ae8f-ddb6087819df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57bc89478d-f479x" podUID="3d2d9321-e897-4d8e-ae8f-ddb6087819df" May 17 00:17:48.747944 containerd[1579]: time="2025-05-17T00:17:48.747889136Z" level=error msg="StopPodSandbox for \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\" failed" error="failed to destroy network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:48.748415 kubelet[2724]: E0517 00:17:48.748303 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:17:48.748415 kubelet[2724]: E0517 00:17:48.748338 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364"} May 17 00:17:48.748415 kubelet[2724]: E0517 00:17:48.748363 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab4da613-d8f1-4a47-86db-18da03ede1ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:17:48.748415 kubelet[2724]: E0517 00:17:48.748383 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab4da613-d8f1-4a47-86db-18da03ede1ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-87trf" podUID="ab4da613-d8f1-4a47-86db-18da03ede1ec" May 17 00:17:48.759630 containerd[1579]: time="2025-05-17T00:17:48.759538497Z" level=error msg="StopPodSandbox for \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\" failed" error="failed to destroy network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:48.759845 kubelet[2724]: E0517 00:17:48.759787 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:17:48.759887 kubelet[2724]: E0517 00:17:48.759841 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76"} May 17 00:17:48.759887 kubelet[2724]: E0517 00:17:48.759871 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5188de2f-1d4a-4fed-8a5e-e1444595d2e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:17:48.759980 kubelet[2724]: E0517 00:17:48.759892 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5188de2f-1d4a-4fed-8a5e-e1444595d2e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vgszq" podUID="5188de2f-1d4a-4fed-8a5e-e1444595d2e7" May 17 00:17:48.761141 containerd[1579]: time="2025-05-17T00:17:48.761111271Z" level=error msg="StopPodSandbox for \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\" failed" error="failed to destroy network 
for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:48.761370 kubelet[2724]: E0517 00:17:48.761348 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:17:48.761510 kubelet[2724]: E0517 00:17:48.761461 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936"} May 17 00:17:48.761510 kubelet[2724]: E0517 00:17:48.761488 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:17:48.761666 kubelet[2724]: E0517 00:17:48.761639 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b46bdf5fd-tkbfp" podUID="ef3e08eb-8caa-48fa-8f23-213ffd83f8a3" May 17 00:17:48.761746 containerd[1579]: time="2025-05-17T00:17:48.761707406Z" level=error msg="StopPodSandbox for \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\" failed" error="failed to destroy network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:48.761898 kubelet[2724]: E0517 00:17:48.761871 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:17:48.761898 kubelet[2724]: E0517 00:17:48.761896 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2"} May 17 00:17:48.761992 kubelet[2724]: E0517 00:17:48.761914 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d70a794c-b705-4096-ab09-a29d9b66f140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" May 17 00:17:48.761992 kubelet[2724]: E0517 00:17:48.761929 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d70a794c-b705-4096-ab09-a29d9b66f140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mm6b4" podUID="d70a794c-b705-4096-ab09-a29d9b66f140" May 17 00:17:48.765891 containerd[1579]: time="2025-05-17T00:17:48.765848257Z" level=error msg="StopPodSandbox for \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\" failed" error="failed to destroy network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:48.766064 kubelet[2724]: E0517 00:17:48.766022 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:17:48.766064 kubelet[2724]: E0517 00:17:48.766051 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64"} May 17 00:17:48.766181 kubelet[2724]: E0517 00:17:48.766071 2724 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"deceb09e-4340-4f28-8a23-a33b54df6910\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:17:48.766181 kubelet[2724]: E0517 00:17:48.766096 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"deceb09e-4340-4f28-8a23-a33b54df6910\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-xtls5" podUID="deceb09e-4340-4f28-8a23-a33b54df6910" May 17 00:17:48.767111 containerd[1579]: time="2025-05-17T00:17:48.767065003Z" level=error msg="StopPodSandbox for \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\" failed" error="failed to destroy network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:48.767246 kubelet[2724]: E0517 00:17:48.767200 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:17:48.767246 kubelet[2724]: E0517 00:17:48.767235 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0"} May 17 00:17:48.767360 kubelet[2724]: E0517 00:17:48.767267 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:17:48.767360 kubelet[2724]: E0517 00:17:48.767283 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64" May 17 00:17:48.936724 containerd[1579]: time="2025-05-17T00:17:48.936577453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdx7n,Uid:5d2460d1-6b11-4f05-a6fd-bf4b83ac6776,Namespace:calico-system,Attempt:0,}" May 17 00:17:49.049890 containerd[1579]: time="2025-05-17T00:17:49.049831279Z" level=error msg="Failed to destroy network for sandbox 
\"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:49.050458 containerd[1579]: time="2025-05-17T00:17:49.050290979Z" level=error msg="encountered an error cleaning up failed sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:49.050458 containerd[1579]: time="2025-05-17T00:17:49.050343678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdx7n,Uid:5d2460d1-6b11-4f05-a6fd-bf4b83ac6776,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:49.051044 kubelet[2724]: E0517 00:17:49.050796 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:49.051044 kubelet[2724]: E0517 00:17:49.050891 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cdx7n" May 17 00:17:49.051044 kubelet[2724]: E0517 00:17:49.050917 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cdx7n" May 17 00:17:49.051194 kubelet[2724]: E0517 00:17:49.050979 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cdx7n_calico-system(5d2460d1-6b11-4f05-a6fd-bf4b83ac6776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cdx7n_calico-system(5d2460d1-6b11-4f05-a6fd-bf4b83ac6776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cdx7n" podUID="5d2460d1-6b11-4f05-a6fd-bf4b83ac6776" May 17 00:17:49.053403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821-shm.mount: Deactivated successfully. May 17 00:17:49.170301 systemd-journald[1156]: Under memory pressure, flushing caches. May 17 00:17:49.166533 systemd-resolved[1456]: Under memory pressure, flushing caches. May 17 00:17:49.166570 systemd-resolved[1456]: Flushed all caches. 
May 17 00:17:49.716300 kubelet[2724]: I0517 00:17:49.716269 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:17:49.717592 containerd[1579]: time="2025-05-17T00:17:49.717530224Z" level=info msg="StopPodSandbox for \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\"" May 17 00:17:49.718160 containerd[1579]: time="2025-05-17T00:17:49.718136809Z" level=info msg="Ensure that sandbox 040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821 in task-service has been cleanup successfully" May 17 00:17:50.036408 containerd[1579]: time="2025-05-17T00:17:50.036304668Z" level=error msg="StopPodSandbox for \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\" failed" error="failed to destroy network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:17:50.036747 kubelet[2724]: E0517 00:17:50.036534 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:17:50.036747 kubelet[2724]: E0517 00:17:50.036584 2724 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821"} May 17 00:17:50.036747 kubelet[2724]: E0517 00:17:50.036619 2724 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:17:50.036747 kubelet[2724]: E0517 00:17:50.036642 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cdx7n" podUID="5d2460d1-6b11-4f05-a6fd-bf4b83ac6776" May 17 00:17:51.742967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193389689.mount: Deactivated successfully. 
May 17 00:17:52.375403 containerd[1579]: time="2025-05-17T00:17:52.375351357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:52.376180 containerd[1579]: time="2025-05-17T00:17:52.376146316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:17:52.380544 containerd[1579]: time="2025-05-17T00:17:52.380506920Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:52.382560 containerd[1579]: time="2025-05-17T00:17:52.382514830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:52.383069 containerd[1579]: time="2025-05-17T00:17:52.383027821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 4.718665834s" May 17 00:17:52.383069 containerd[1579]: time="2025-05-17T00:17:52.383056675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:17:52.391101 containerd[1579]: time="2025-05-17T00:17:52.391051946Z" level=info msg="CreateContainer within sandbox \"d32255ed28515af878163517a78d25892d1844924a8f83d63f67ea084871be01\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:17:52.406507 containerd[1579]: time="2025-05-17T00:17:52.406436623Z" level=info 
msg="CreateContainer within sandbox \"d32255ed28515af878163517a78d25892d1844924a8f83d63f67ea084871be01\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc9f7b20b3faa696ca86915dc66a09f23dea53664c6e83f3257be8ff77c5a08a\"" May 17 00:17:52.407218 containerd[1579]: time="2025-05-17T00:17:52.407172941Z" level=info msg="StartContainer for \"bc9f7b20b3faa696ca86915dc66a09f23dea53664c6e83f3257be8ff77c5a08a\"" May 17 00:17:52.488118 containerd[1579]: time="2025-05-17T00:17:52.488073623Z" level=info msg="StartContainer for \"bc9f7b20b3faa696ca86915dc66a09f23dea53664c6e83f3257be8ff77c5a08a\" returns successfully" May 17 00:17:52.564679 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:17:52.564799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 00:17:52.624132 containerd[1579]: time="2025-05-17T00:17:52.624086751Z" level=info msg="StopPodSandbox for \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\"" May 17 00:17:52.810304 kubelet[2724]: I0517 00:17:52.809913 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lnvr4" podStartSLOduration=1.5897510160000001 podStartE2EDuration="14.809894817s" podCreationTimestamp="2025-05-17 00:17:38 +0000 UTC" firstStartedPulling="2025-05-17 00:17:39.163548825 +0000 UTC m=+18.324319520" lastFinishedPulling="2025-05-17 00:17:52.383692626 +0000 UTC m=+31.544463321" observedRunningTime="2025-05-17 00:17:52.808902939 +0000 UTC m=+31.969673624" watchObservedRunningTime="2025-05-17 00:17:52.809894817 +0000 UTC m=+31.970665512" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.687 [INFO][4008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.687 [INFO][4008] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" iface="eth0" netns="/var/run/netns/cni-5e69c8ce-1c7a-14b0-4841-c73fd86f1431" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.687 [INFO][4008] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" iface="eth0" netns="/var/run/netns/cni-5e69c8ce-1c7a-14b0-4841-c73fd86f1431" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.689 [INFO][4008] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" iface="eth0" netns="/var/run/netns/cni-5e69c8ce-1c7a-14b0-4841-c73fd86f1431" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.689 [INFO][4008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.689 [INFO][4008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.745 [INFO][4024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.746 [INFO][4024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.746 [INFO][4024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.798 [WARNING][4024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.798 [INFO][4024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.800 [INFO][4024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:52.810807 containerd[1579]: 2025-05-17 00:17:52.805 [INFO][4008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:17:52.811107 containerd[1579]: time="2025-05-17T00:17:52.810799831Z" level=info msg="TearDown network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\" successfully" May 17 00:17:52.811107 containerd[1579]: time="2025-05-17T00:17:52.810822484Z" level=info msg="StopPodSandbox for \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\" returns successfully" May 17 00:17:52.814850 systemd[1]: run-netns-cni\x2d5e69c8ce\x2d1c7a\x2d14b0\x2d4841\x2dc73fd86f1431.mount: Deactivated successfully. 
May 17 00:17:52.869566 kubelet[2724]: I0517 00:17:52.869519 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7cdc\" (UniqueName: \"kubernetes.io/projected/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-kube-api-access-x7cdc\") pod \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\" (UID: \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\") " May 17 00:17:52.869566 kubelet[2724]: I0517 00:17:52.869566 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-backend-key-pair\") pod \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\" (UID: \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\") " May 17 00:17:52.869716 kubelet[2724]: I0517 00:17:52.869595 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-ca-bundle\") pod \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\" (UID: \"ef3e08eb-8caa-48fa-8f23-213ffd83f8a3\") " May 17 00:17:52.870066 kubelet[2724]: I0517 00:17:52.870038 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ef3e08eb-8caa-48fa-8f23-213ffd83f8a3" (UID: "ef3e08eb-8caa-48fa-8f23-213ffd83f8a3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:17:52.878312 systemd[1]: var-lib-kubelet-pods-ef3e08eb\x2d8caa\x2d48fa\x2d8f23\x2d213ffd83f8a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx7cdc.mount: Deactivated successfully. 
May 17 00:17:52.879890 kubelet[2724]: I0517 00:17:52.879102 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-kube-api-access-x7cdc" (OuterVolumeSpecName: "kube-api-access-x7cdc") pod "ef3e08eb-8caa-48fa-8f23-213ffd83f8a3" (UID: "ef3e08eb-8caa-48fa-8f23-213ffd83f8a3"). InnerVolumeSpecName "kube-api-access-x7cdc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:17:52.879890 kubelet[2724]: I0517 00:17:52.879659 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ef3e08eb-8caa-48fa-8f23-213ffd83f8a3" (UID: "ef3e08eb-8caa-48fa-8f23-213ffd83f8a3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:17:52.970468 kubelet[2724]: I0517 00:17:52.970418 2724 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 17 00:17:52.970468 kubelet[2724]: I0517 00:17:52.970452 2724 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 17 00:17:52.970468 kubelet[2724]: I0517 00:17:52.970461 2724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7cdc\" (UniqueName: \"kubernetes.io/projected/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3-kube-api-access-x7cdc\") on node \"localhost\" DevicePath \"\"" May 17 00:17:53.744112 systemd[1]: var-lib-kubelet-pods-ef3e08eb\x2d8caa\x2d48fa\x2d8f23\x2d213ffd83f8a3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 17 00:17:53.874565 kubelet[2724]: I0517 00:17:53.874508 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b3843bf5-7516-4c9f-923b-822352f7eab5-whisker-backend-key-pair\") pod \"whisker-7f7f9c875b-6g4bk\" (UID: \"b3843bf5-7516-4c9f-923b-822352f7eab5\") " pod="calico-system/whisker-7f7f9c875b-6g4bk" May 17 00:17:53.874565 kubelet[2724]: I0517 00:17:53.874565 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrtp\" (UniqueName: \"kubernetes.io/projected/b3843bf5-7516-4c9f-923b-822352f7eab5-kube-api-access-plrtp\") pod \"whisker-7f7f9c875b-6g4bk\" (UID: \"b3843bf5-7516-4c9f-923b-822352f7eab5\") " pod="calico-system/whisker-7f7f9c875b-6g4bk" May 17 00:17:53.875062 kubelet[2724]: I0517 00:17:53.874593 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3843bf5-7516-4c9f-923b-822352f7eab5-whisker-ca-bundle\") pod \"whisker-7f7f9c875b-6g4bk\" (UID: \"b3843bf5-7516-4c9f-923b-822352f7eab5\") " pod="calico-system/whisker-7f7f9c875b-6g4bk" May 17 00:17:54.074798 containerd[1579]: time="2025-05-17T00:17:54.074757889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7f9c875b-6g4bk,Uid:b3843bf5-7516-4c9f-923b-822352f7eab5,Namespace:calico-system,Attempt:0,}" May 17 00:17:54.744756 systemd-networkd[1242]: calid6832d5672d: Link UP May 17 00:17:54.745518 systemd-networkd[1242]: calid6832d5672d: Gained carrier May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.565 [INFO][4183] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.576 [INFO][4183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0 whisker-7f7f9c875b- calico-system b3843bf5-7516-4c9f-923b-822352f7eab5 884 0 2025-05-17 00:17:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f7f9c875b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f7f9c875b-6g4bk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid6832d5672d [] [] }} ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.576 [INFO][4183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.624 [INFO][4198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" HandleID="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Workload="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.624 [INFO][4198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" HandleID="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Workload="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f7f9c875b-6g4bk", "timestamp":"2025-05-17 00:17:54.62456953 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.624 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.624 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.624 [INFO][4198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.637 [INFO][4198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.644 [INFO][4198] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.648 [INFO][4198] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.650 [INFO][4198] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.652 [INFO][4198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.653 [INFO][4198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.655 [INFO][4198] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.698 [INFO][4198] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.732 [INFO][4198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.732 [INFO][4198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" host="localhost" May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.732 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:17:54.814851 containerd[1579]: 2025-05-17 00:17:54.732 [INFO][4198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" HandleID="k8s-pod-network.ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Workload="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" May 17 00:17:54.815938 containerd[1579]: 2025-05-17 00:17:54.736 [INFO][4183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0", GenerateName:"whisker-7f7f9c875b-", Namespace:"calico-system", SelfLink:"", UID:"b3843bf5-7516-4c9f-923b-822352f7eab5", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f7f9c875b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f7f9c875b-6g4bk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid6832d5672d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:54.815938 containerd[1579]: 2025-05-17 00:17:54.736 [INFO][4183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" May 17 00:17:54.815938 containerd[1579]: 2025-05-17 00:17:54.736 [INFO][4183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6832d5672d ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" May 17 00:17:54.815938 containerd[1579]: 2025-05-17 00:17:54.745 [INFO][4183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" May 17 00:17:54.815938 containerd[1579]: 2025-05-17 00:17:54.746 [INFO][4183] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0", GenerateName:"whisker-7f7f9c875b-", Namespace:"calico-system", SelfLink:"", UID:"b3843bf5-7516-4c9f-923b-822352f7eab5", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 53, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f7f9c875b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e", Pod:"whisker-7f7f9c875b-6g4bk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid6832d5672d", MAC:"ca:df:bd:f5:de:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:54.815938 containerd[1579]: 2025-05-17 00:17:54.811 [INFO][4183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e" Namespace="calico-system" Pod="whisker-7f7f9c875b-6g4bk" WorkloadEndpoint="localhost-k8s-whisker--7f7f9c875b--6g4bk-eth0" May 17 00:17:54.883043 containerd[1579]: time="2025-05-17T00:17:54.882463765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:54.883043 containerd[1579]: time="2025-05-17T00:17:54.883009587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:54.883043 containerd[1579]: time="2025-05-17T00:17:54.883020858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:54.883276 containerd[1579]: time="2025-05-17T00:17:54.883097522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:54.912379 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:17:54.935068 kubelet[2724]: I0517 00:17:54.935025 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3e08eb-8caa-48fa-8f23-213ffd83f8a3" path="/var/lib/kubelet/pods/ef3e08eb-8caa-48fa-8f23-213ffd83f8a3/volumes" May 17 00:17:54.944287 containerd[1579]: time="2025-05-17T00:17:54.944163138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7f9c875b-6g4bk,Uid:b3843bf5-7516-4c9f-923b-822352f7eab5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed575e0555b57aac11483cdbde67ecd1c7af586b0be0754e6cdae43465212d8e\"" May 17 00:17:54.945831 containerd[1579]: time="2025-05-17T00:17:54.945726787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:17:55.179206 containerd[1579]: time="2025-05-17T00:17:55.179123775Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:55.241644 containerd[1579]: time="2025-05-17T00:17:55.241572192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:55.241745 
containerd[1579]: time="2025-05-17T00:17:55.241630471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:17:55.241940 kubelet[2724]: E0517 00:17:55.241872 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:55.242036 kubelet[2724]: E0517 00:17:55.241959 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:55.244232 kubelet[2724]: E0517 00:17:55.244168 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5fa0e8b210c943fe9a524550ec7c8a90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plrtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f7f9c875b-6g4bk_calico-system(b3843bf5-7516-4c9f-923b-822352f7eab5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:55.246061 containerd[1579]: 
time="2025-05-17T00:17:55.246032677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:17:55.495764 containerd[1579]: time="2025-05-17T00:17:55.495618807Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:55.546398 containerd[1579]: time="2025-05-17T00:17:55.546348415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:55.546574 containerd[1579]: time="2025-05-17T00:17:55.546403107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:17:55.546777 kubelet[2724]: E0517 00:17:55.546723 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:55.546888 kubelet[2724]: E0517 00:17:55.546782 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:55.546960 kubelet[2724]: E0517 00:17:55.546929 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plrtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f7f9c875b-6g4bk_calico-system(b3843bf5-7516-4c9f-923b-822352f7eab5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:55.548837 kubelet[2724]: E0517 00:17:55.548778 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f7f9c875b-6g4bk" podUID="b3843bf5-7516-4c9f-923b-822352f7eab5" May 17 00:17:55.730703 kubelet[2724]: E0517 00:17:55.730647 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7f7f9c875b-6g4bk" podUID="b3843bf5-7516-4c9f-923b-822352f7eab5" May 17 00:17:56.743999 kubelet[2724]: E0517 00:17:56.743881 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7f7f9c875b-6g4bk" podUID="b3843bf5-7516-4c9f-923b-822352f7eab5" May 17 00:17:56.782465 systemd-networkd[1242]: calid6832d5672d: Gained IPv6LL May 17 00:17:57.222643 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:45154.service - OpenSSH per-connection server daemon (10.0.0.1:45154). May 17 00:17:57.264499 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 45154 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:57.266968 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:57.271852 systemd-logind[1564]: New session 8 of user core. May 17 00:17:57.281491 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:17:57.427095 sshd[4314]: pam_unix(sshd:session): session closed for user core May 17 00:17:57.431232 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:45154.service: Deactivated successfully. May 17 00:17:57.433632 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. May 17 00:17:57.433774 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:17:57.434929 systemd-logind[1564]: Removed session 8. 
May 17 00:17:58.934040 containerd[1579]: time="2025-05-17T00:17:58.933606851Z" level=info msg="StopPodSandbox for \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\"" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:58.979 [INFO][4384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:58.979 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" iface="eth0" netns="/var/run/netns/cni-be0a1831-95db-4b6c-2ebc-119f8d71f16d" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:58.979 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" iface="eth0" netns="/var/run/netns/cni-be0a1831-95db-4b6c-2ebc-119f8d71f16d" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:58.980 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" iface="eth0" netns="/var/run/netns/cni-be0a1831-95db-4b6c-2ebc-119f8d71f16d" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:58.980 [INFO][4384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:58.980 [INFO][4384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:59.000 [INFO][4400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:59.000 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:59.000 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:59.005 [WARNING][4400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:59.005 [INFO][4400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:59.006 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:59.012009 containerd[1579]: 2025-05-17 00:17:59.009 [INFO][4384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:17:59.012833 containerd[1579]: time="2025-05-17T00:17:59.012785714Z" level=info msg="TearDown network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\" successfully" May 17 00:17:59.012833 containerd[1579]: time="2025-05-17T00:17:59.012821200Z" level=info msg="StopPodSandbox for \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\" returns successfully" May 17 00:17:59.013839 containerd[1579]: time="2025-05-17T00:17:59.013777352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-82l2f,Uid:1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64,Namespace:calico-system,Attempt:1,}" May 17 00:17:59.015692 systemd[1]: run-netns-cni\x2dbe0a1831\x2d95db\x2d4b6c\x2d2ebc\x2d119f8d71f16d.mount: Deactivated successfully. 
May 17 00:17:59.112585 systemd-networkd[1242]: cali4e9afa9cfa2: Link UP May 17 00:17:59.113462 systemd-networkd[1242]: cali4e9afa9cfa2: Gained carrier May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.049 [INFO][4408] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.058 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0 goldmane-8f77d7b6c- calico-system 1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64 952 0 2025-05-17 00:17:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-82l2f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4e9afa9cfa2 [] [] }} ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.058 [INFO][4408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.080 [INFO][4421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" HandleID="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.080 [INFO][4421] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" HandleID="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-82l2f", "timestamp":"2025-05-17 00:17:59.080333233 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.080 [INFO][4421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.080 [INFO][4421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.080 [INFO][4421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.086 [INFO][4421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.090 [INFO][4421] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.094 [INFO][4421] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.095 [INFO][4421] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.097 [INFO][4421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.097 [INFO][4421] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.098 [INFO][4421] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.102 [INFO][4421] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.106 [INFO][4421] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.106 [INFO][4421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" host="localhost" May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.106 [INFO][4421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:17:59.125675 containerd[1579]: 2025-05-17 00:17:59.106 [INFO][4421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" HandleID="k8s-pod-network.5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.126461 containerd[1579]: 2025-05-17 00:17:59.109 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-82l2f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e9afa9cfa2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:59.126461 containerd[1579]: 2025-05-17 00:17:59.109 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.126461 containerd[1579]: 2025-05-17 00:17:59.109 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e9afa9cfa2 ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.126461 containerd[1579]: 2025-05-17 00:17:59.113 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.126461 containerd[1579]: 2025-05-17 00:17:59.114 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 38, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff", Pod:"goldmane-8f77d7b6c-82l2f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e9afa9cfa2", MAC:"06:f0:74:ac:12:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:59.126461 containerd[1579]: 2025-05-17 00:17:59.122 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff" Namespace="calico-system" Pod="goldmane-8f77d7b6c-82l2f" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:17:59.144957 containerd[1579]: time="2025-05-17T00:17:59.144849854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:59.144957 containerd[1579]: time="2025-05-17T00:17:59.144922049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:59.144957 containerd[1579]: time="2025-05-17T00:17:59.144935515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:59.145207 containerd[1579]: time="2025-05-17T00:17:59.145040070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:59.175212 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:17:59.203831 containerd[1579]: time="2025-05-17T00:17:59.203730715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-82l2f,Uid:1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64,Namespace:calico-system,Attempt:1,} returns sandbox id \"5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff\"" May 17 00:17:59.205724 containerd[1579]: time="2025-05-17T00:17:59.205353937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:17:59.464428 containerd[1579]: time="2025-05-17T00:17:59.464298535Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:59.465406 containerd[1579]: time="2025-05-17T00:17:59.465378828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:59.465520 containerd[1579]: time="2025-05-17T00:17:59.465463557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:17:59.465680 kubelet[2724]: E0517 00:17:59.465616 2724 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:59.466111 kubelet[2724]: E0517 00:17:59.465687 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:59.466111 kubelet[2724]: E0517 00:17:59.465830 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n49l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-82l2f_calico-system(1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:59.467044 kubelet[2724]: E0517 00:17:59.467008 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64" May 17 00:17:59.750222 kubelet[2724]: E0517 
00:17:59.750078 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64" May 17 00:18:00.752547 kubelet[2724]: E0517 00:18:00.752202 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64" May 17 00:18:00.814418 systemd-networkd[1242]: cali4e9afa9cfa2: Gained IPv6LL May 17 00:18:00.932842 containerd[1579]: time="2025-05-17T00:18:00.932784872Z" level=info msg="StopPodSandbox for \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\"" May 17 00:18:00.935350 containerd[1579]: time="2025-05-17T00:18:00.932784952Z" level=info msg="StopPodSandbox for \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\"" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:00.984 [INFO][4529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:00.984 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" iface="eth0" netns="/var/run/netns/cni-1faa0ba9-6141-7f2e-cce4-ba2c9d63483f" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:00.984 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" iface="eth0" netns="/var/run/netns/cni-1faa0ba9-6141-7f2e-cce4-ba2c9d63483f" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:00.985 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" iface="eth0" netns="/var/run/netns/cni-1faa0ba9-6141-7f2e-cce4-ba2c9d63483f" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:00.985 [INFO][4529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:00.985 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:01.004 [INFO][4545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:01.004 [INFO][4545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:01.004 [INFO][4545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:01.009 [WARNING][4545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:01.009 [INFO][4545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:01.010 [INFO][4545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:01.015502 containerd[1579]: 2025-05-17 00:18:01.012 [INFO][4529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:01.017047 containerd[1579]: time="2025-05-17T00:18:01.016371365Z" level=info msg="TearDown network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\" successfully" May 17 00:18:01.017047 containerd[1579]: time="2025-05-17T00:18:01.016402744Z" level=info msg="StopPodSandbox for \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\" returns successfully" May 17 00:18:01.017096 kubelet[2724]: E0517 00:18:01.016705 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:01.017924 containerd[1579]: time="2025-05-17T00:18:01.017121562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vgszq,Uid:5188de2f-1d4a-4fed-8a5e-e1444595d2e7,Namespace:kube-system,Attempt:1,}" May 17 00:18:01.019167 systemd[1]: run-netns-cni\x2d1faa0ba9\x2d6141\x2d7f2e\x2dcce4\x2dba2c9d63483f.mount: Deactivated successfully. 
May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:00.983 [INFO][4528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:00.983 [INFO][4528] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" iface="eth0" netns="/var/run/netns/cni-7a3d8171-bcf2-c078-bbba-1c00de14d26d" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:00.983 [INFO][4528] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" iface="eth0" netns="/var/run/netns/cni-7a3d8171-bcf2-c078-bbba-1c00de14d26d" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:00.984 [INFO][4528] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" iface="eth0" netns="/var/run/netns/cni-7a3d8171-bcf2-c078-bbba-1c00de14d26d" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:00.984 [INFO][4528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:00.984 [INFO][4528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:01.004 [INFO][4546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:01.004 [INFO][4546] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:01.010 [INFO][4546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:01.075 [WARNING][4546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:01.075 [INFO][4546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:01.092 [INFO][4546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:01.097775 containerd[1579]: 2025-05-17 00:18:01.094 [INFO][4528] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:01.098324 containerd[1579]: time="2025-05-17T00:18:01.098185215Z" level=info msg="TearDown network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\" successfully" May 17 00:18:01.098324 containerd[1579]: time="2025-05-17T00:18:01.098211484Z" level=info msg="StopPodSandbox for \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\" returns successfully" May 17 00:18:01.099151 containerd[1579]: time="2025-05-17T00:18:01.098962322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-87trf,Uid:ab4da613-d8f1-4a47-86db-18da03ede1ec,Namespace:calico-apiserver,Attempt:1,}" May 17 00:18:01.100773 systemd[1]: run-netns-cni\x2d7a3d8171\x2dbcf2\x2dc078\x2dbbba\x2d1c00de14d26d.mount: Deactivated successfully. May 17 00:18:01.633084 systemd-networkd[1242]: cali206766b87f1: Link UP May 17 00:18:01.635147 systemd-networkd[1242]: cali206766b87f1: Gained carrier May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.562 [INFO][4585] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.573 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0 coredns-7c65d6cfc9- kube-system 5188de2f-1d4a-4fed-8a5e-e1444595d2e7 983 0 2025-05-17 00:17:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-vgszq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali206766b87f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-vgszq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.573 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vgszq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.602 [INFO][4611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" HandleID="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.603 [INFO][4611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" HandleID="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-vgszq", "timestamp":"2025-05-17 00:18:01.602856702 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.603 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.603 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.603 [INFO][4611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.609 [INFO][4611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.612 [INFO][4611] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.616 [INFO][4611] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.617 [INFO][4611] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.619 [INFO][4611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.619 [INFO][4611] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.620 [INFO][4611] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314 May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.623 [INFO][4611] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.627 [INFO][4611] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.627 [INFO][4611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" host="localhost" May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.627 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:01.645714 containerd[1579]: 2025-05-17 00:18:01.627 [INFO][4611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" HandleID="k8s-pod-network.1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.646299 containerd[1579]: 2025-05-17 00:18:01.630 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vgszq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5188de2f-1d4a-4fed-8a5e-e1444595d2e7", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-vgszq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali206766b87f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:01.646299 containerd[1579]: 2025-05-17 00:18:01.630 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vgszq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.646299 containerd[1579]: 2025-05-17 00:18:01.630 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali206766b87f1 ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vgszq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.646299 containerd[1579]: 2025-05-17 00:18:01.634 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vgszq" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.646299 containerd[1579]: 2025-05-17 00:18:01.635 [INFO][4585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vgszq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5188de2f-1d4a-4fed-8a5e-e1444595d2e7", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314", Pod:"coredns-7c65d6cfc9-vgszq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali206766b87f1", MAC:"ca:e9:98:eb:93:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:01.646299 containerd[1579]: 2025-05-17 00:18:01.643 [INFO][4585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vgszq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:01.662003 containerd[1579]: time="2025-05-17T00:18:01.661403994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:18:01.662003 containerd[1579]: time="2025-05-17T00:18:01.661978691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:18:01.662003 containerd[1579]: time="2025-05-17T00:18:01.661992827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:01.662217 containerd[1579]: time="2025-05-17T00:18:01.662079680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:01.694871 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:18:01.723561 containerd[1579]: time="2025-05-17T00:18:01.723445011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vgszq,Uid:5188de2f-1d4a-4fed-8a5e-e1444595d2e7,Namespace:kube-system,Attempt:1,} returns sandbox id \"1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314\"" May 17 00:18:01.724374 kubelet[2724]: E0517 00:18:01.724340 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:01.726708 containerd[1579]: time="2025-05-17T00:18:01.726670275Z" level=info msg="CreateContainer within sandbox \"1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:18:01.740863 systemd-networkd[1242]: calic5bc61f51f9: Link UP May 17 00:18:01.741235 systemd-networkd[1242]: calic5bc61f51f9: Gained carrier May 17 00:18:01.745646 containerd[1579]: time="2025-05-17T00:18:01.745514818Z" level=info msg="CreateContainer within sandbox \"1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72b6ca8fa6f1f1639f76a3841caac975a3314ad51417afe0ca8e9577df21f95e\"" May 17 00:18:01.746193 containerd[1579]: time="2025-05-17T00:18:01.746177720Z" level=info msg="StartContainer for \"72b6ca8fa6f1f1639f76a3841caac975a3314ad51417afe0ca8e9577df21f95e\"" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.569 [INFO][4595] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.582 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0 calico-apiserver-6fcc4f48fc- calico-apiserver ab4da613-d8f1-4a47-86db-18da03ede1ec 982 0 2025-05-17 00:17:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcc4f48fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fcc4f48fc-87trf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5bc61f51f9 [] [] }} ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.582 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.603 [INFO][4617] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" HandleID="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.603 [INFO][4617] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" HandleID="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000123710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fcc4f48fc-87trf", "timestamp":"2025-05-17 00:18:01.603424136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.603 [INFO][4617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.627 [INFO][4617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.628 [INFO][4617] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.710 [INFO][4617] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.714 [INFO][4617] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.718 [INFO][4617] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.720 [INFO][4617] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.722 [INFO][4617] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.722 [INFO][4617] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.724 [INFO][4617] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5 May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.731 [INFO][4617] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.735 [INFO][4617] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.735 [INFO][4617] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" host="localhost" May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.735 [INFO][4617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:18:01.756555 containerd[1579]: 2025-05-17 00:18:01.736 [INFO][4617] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" HandleID="k8s-pod-network.092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.757142 containerd[1579]: 2025-05-17 00:18:01.738 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab4da613-d8f1-4a47-86db-18da03ede1ec", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fcc4f48fc-87trf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5bc61f51f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:01.757142 containerd[1579]: 2025-05-17 00:18:01.738 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.757142 containerd[1579]: 2025-05-17 00:18:01.739 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5bc61f51f9 ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.757142 containerd[1579]: 2025-05-17 00:18:01.741 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.757142 containerd[1579]: 2025-05-17 00:18:01.741 [INFO][4595] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"ab4da613-d8f1-4a47-86db-18da03ede1ec", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5", Pod:"calico-apiserver-6fcc4f48fc-87trf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5bc61f51f9", MAC:"fa:7a:aa:db:b5:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:01.757142 containerd[1579]: 2025-05-17 00:18:01.753 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-87trf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:01.779694 containerd[1579]: time="2025-05-17T00:18:01.779508809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:18:01.779694 containerd[1579]: time="2025-05-17T00:18:01.779573059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:18:01.779694 containerd[1579]: time="2025-05-17T00:18:01.779589630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:01.780198 containerd[1579]: time="2025-05-17T00:18:01.780133468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:01.800235 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:18:01.820236 containerd[1579]: time="2025-05-17T00:18:01.820182486Z" level=info msg="StartContainer for \"72b6ca8fa6f1f1639f76a3841caac975a3314ad51417afe0ca8e9577df21f95e\" returns successfully" May 17 00:18:01.830103 containerd[1579]: time="2025-05-17T00:18:01.829972091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-87trf,Uid:ab4da613-d8f1-4a47-86db-18da03ede1ec,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5\"" May 17 00:18:01.831449 containerd[1579]: time="2025-05-17T00:18:01.831414444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:18:01.932190 containerd[1579]: time="2025-05-17T00:18:01.932044052Z" level=info msg="StopPodSandbox for \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\"" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.971 [INFO][4770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.972 [INFO][4770] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" iface="eth0" netns="/var/run/netns/cni-4bd56b7e-94dd-4775-1581-02b30c9e6d51" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.972 [INFO][4770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" iface="eth0" netns="/var/run/netns/cni-4bd56b7e-94dd-4775-1581-02b30c9e6d51" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.973 [INFO][4770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" iface="eth0" netns="/var/run/netns/cni-4bd56b7e-94dd-4775-1581-02b30c9e6d51" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.973 [INFO][4770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.973 [INFO][4770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.995 [INFO][4779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.996 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:01.996 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:02.003 [WARNING][4779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:02.003 [INFO][4779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:02.004 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:02.009811 containerd[1579]: 2025-05-17 00:18:02.007 [INFO][4770] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:02.010636 containerd[1579]: time="2025-05-17T00:18:02.009968104Z" level=info msg="TearDown network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\" successfully" May 17 00:18:02.010636 containerd[1579]: time="2025-05-17T00:18:02.009996989Z" level=info msg="StopPodSandbox for \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\" returns successfully" May 17 00:18:02.010692 kubelet[2724]: E0517 00:18:02.010392 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:02.011032 containerd[1579]: time="2025-05-17T00:18:02.010938523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm6b4,Uid:d70a794c-b705-4096-ab09-a29d9b66f140,Namespace:kube-system,Attempt:1,}" May 17 00:18:02.021049 systemd[1]: run-netns-cni\x2d4bd56b7e\x2d94dd\x2d4775\x2d1581\x2d02b30c9e6d51.mount: Deactivated successfully. 
May 17 00:18:02.184498 systemd-networkd[1242]: calid0158dd7036: Link UP May 17 00:18:02.186701 systemd-networkd[1242]: calid0158dd7036: Gained carrier May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.047 [INFO][4789] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.058 [INFO][4789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0 coredns-7c65d6cfc9- kube-system d70a794c-b705-4096-ab09-a29d9b66f140 1005 0 2025-05-17 00:17:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mm6b4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid0158dd7036 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.058 [INFO][4789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.084 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" HandleID="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.084 [INFO][4801] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" HandleID="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad160), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mm6b4", "timestamp":"2025-05-17 00:18:02.084395581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.084 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.084 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.084 [INFO][4801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.092 [INFO][4801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.100 [INFO][4801] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.106 [INFO][4801] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.108 [INFO][4801] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.121 [INFO][4801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.121 [INFO][4801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.129 [INFO][4801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304 May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.137 [INFO][4801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.155 [INFO][4801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.155 [INFO][4801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" host="localhost" May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.155 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:18:02.201829 containerd[1579]: 2025-05-17 00:18:02.155 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" HandleID="k8s-pod-network.47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.202620 containerd[1579]: 2025-05-17 00:18:02.173 [INFO][4789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d70a794c-b705-4096-ab09-a29d9b66f140", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mm6b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0158dd7036", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:02.202620 containerd[1579]: 2025-05-17 00:18:02.173 [INFO][4789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.202620 containerd[1579]: 2025-05-17 00:18:02.173 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0158dd7036 ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.202620 containerd[1579]: 2025-05-17 00:18:02.185 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.202620 containerd[1579]: 2025-05-17 00:18:02.185 [INFO][4789] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d70a794c-b705-4096-ab09-a29d9b66f140", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304", Pod:"coredns-7c65d6cfc9-mm6b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0158dd7036", MAC:"2a:77:ab:fa:f1:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:02.202620 containerd[1579]: 2025-05-17 00:18:02.197 [INFO][4789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm6b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:02.230654 containerd[1579]: time="2025-05-17T00:18:02.230547921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:18:02.230654 containerd[1579]: time="2025-05-17T00:18:02.230605158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:18:02.230654 containerd[1579]: time="2025-05-17T00:18:02.230618904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:02.231180 containerd[1579]: time="2025-05-17T00:18:02.230998735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:02.263117 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:18:02.294008 containerd[1579]: time="2025-05-17T00:18:02.293925500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm6b4,Uid:d70a794c-b705-4096-ab09-a29d9b66f140,Namespace:kube-system,Attempt:1,} returns sandbox id \"47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304\"" May 17 00:18:02.294757 kubelet[2724]: E0517 00:18:02.294722 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:02.296849 containerd[1579]: time="2025-05-17T00:18:02.296807741Z" level=info msg="CreateContainer within sandbox \"47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 
00:18:02.442519 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:35018.service - OpenSSH per-connection server daemon (10.0.0.1:35018). May 17 00:18:02.524704 sshd[4878]: Accepted publickey for core from 10.0.0.1 port 35018 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:02.526574 sshd[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:02.530534 systemd-logind[1564]: New session 9 of user core. May 17 00:18:02.538538 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:18:02.762720 kubelet[2724]: E0517 00:18:02.762601 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:02.813429 sshd[4878]: pam_unix(sshd:session): session closed for user core May 17 00:18:02.817803 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:35018.service: Deactivated successfully. May 17 00:18:02.820236 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:18:02.820397 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. May 17 00:18:02.821661 systemd-logind[1564]: Removed session 9. 
May 17 00:18:02.845439 kubelet[2724]: I0517 00:18:02.845323 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vgszq" podStartSLOduration=35.845283846 podStartE2EDuration="35.845283846s" podCreationTimestamp="2025-05-17 00:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:18:02.845069294 +0000 UTC m=+42.005839989" watchObservedRunningTime="2025-05-17 00:18:02.845283846 +0000 UTC m=+42.006054541" May 17 00:18:02.931986 containerd[1579]: time="2025-05-17T00:18:02.931916776Z" level=info msg="StopPodSandbox for \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\"" May 17 00:18:03.055586 containerd[1579]: time="2025-05-17T00:18:03.055462767Z" level=info msg="CreateContainer within sandbox \"47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fef2006ca60b5a14ac58e322fe7cc1cb7df01b5e84a4c61f24abe35ec009df4d\"" May 17 00:18:03.056478 containerd[1579]: time="2025-05-17T00:18:03.055999713Z" level=info msg="StartContainer for \"fef2006ca60b5a14ac58e322fe7cc1cb7df01b5e84a4c61f24abe35ec009df4d\"" May 17 00:18:03.118385 systemd-networkd[1242]: calic5bc61f51f9: Gained IPv6LL May 17 00:18:03.163367 containerd[1579]: time="2025-05-17T00:18:03.163329652Z" level=info msg="StartContainer for \"fef2006ca60b5a14ac58e322fe7cc1cb7df01b5e84a4c61f24abe35ec009df4d\" returns successfully" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.132 [INFO][4908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.132 [INFO][4908] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" iface="eth0" netns="/var/run/netns/cni-8acb8658-f9a1-a02b-45d5-8448344440bd" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.132 [INFO][4908] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" iface="eth0" netns="/var/run/netns/cni-8acb8658-f9a1-a02b-45d5-8448344440bd" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.133 [INFO][4908] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" iface="eth0" netns="/var/run/netns/cni-8acb8658-f9a1-a02b-45d5-8448344440bd" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.133 [INFO][4908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.133 [INFO][4908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.153 [INFO][4956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.153 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.153 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.179 [WARNING][4956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.179 [INFO][4956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.180 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:03.185402 containerd[1579]: 2025-05-17 00:18:03.182 [INFO][4908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:03.185889 containerd[1579]: time="2025-05-17T00:18:03.185545947Z" level=info msg="TearDown network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\" successfully" May 17 00:18:03.185889 containerd[1579]: time="2025-05-17T00:18:03.185567718Z" level=info msg="StopPodSandbox for \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\" returns successfully" May 17 00:18:03.186304 containerd[1579]: time="2025-05-17T00:18:03.186283289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57bc89478d-f479x,Uid:3d2d9321-e897-4d8e-ae8f-ddb6087819df,Namespace:calico-system,Attempt:1,}" May 17 00:18:03.188499 systemd[1]: run-netns-cni\x2d8acb8658\x2df9a1\x2da02b\x2d45d5\x2d8448344440bd.mount: Deactivated successfully. 
May 17 00:18:03.330331 systemd-networkd[1242]: caliac72e5a8af1: Link UP May 17 00:18:03.331123 systemd-networkd[1242]: caliac72e5a8af1: Gained carrier May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.266 [INFO][4981] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.274 [INFO][4981] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0 calico-kube-controllers-57bc89478d- calico-system 3d2d9321-e897-4d8e-ae8f-ddb6087819df 1030 0 2025-05-17 00:17:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57bc89478d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57bc89478d-f479x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliac72e5a8af1 [] [] }} ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.274 [INFO][4981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.296 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" 
HandleID="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.296 [INFO][5003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" HandleID="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57bc89478d-f479x", "timestamp":"2025-05-17 00:18:03.29672048 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.296 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.296 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.296 [INFO][5003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.302 [INFO][5003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.306 [INFO][5003] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.310 [INFO][5003] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.311 [INFO][5003] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.314 [INFO][5003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.314 [INFO][5003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.316 [INFO][5003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9 May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.319 [INFO][5003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.325 [INFO][5003] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.325 [INFO][5003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" host="localhost" May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.325 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:03.341557 containerd[1579]: 2025-05-17 00:18:03.325 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" HandleID="k8s-pod-network.9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.342215 containerd[1579]: 2025-05-17 00:18:03.328 [INFO][4981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0", GenerateName:"calico-kube-controllers-57bc89478d-", Namespace:"calico-system", SelfLink:"", UID:"3d2d9321-e897-4d8e-ae8f-ddb6087819df", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57bc89478d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57bc89478d-f479x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac72e5a8af1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:03.342215 containerd[1579]: 2025-05-17 00:18:03.328 [INFO][4981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.342215 containerd[1579]: 2025-05-17 00:18:03.328 [INFO][4981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac72e5a8af1 ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.342215 containerd[1579]: 2025-05-17 00:18:03.330 [INFO][4981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.342215 containerd[1579]: 
2025-05-17 00:18:03.330 [INFO][4981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0", GenerateName:"calico-kube-controllers-57bc89478d-", Namespace:"calico-system", SelfLink:"", UID:"3d2d9321-e897-4d8e-ae8f-ddb6087819df", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57bc89478d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9", Pod:"calico-kube-controllers-57bc89478d-f479x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac72e5a8af1", MAC:"e6:9b:a9:d1:d3:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:03.342215 containerd[1579]: 
2025-05-17 00:18:03.338 [INFO][4981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9" Namespace="calico-system" Pod="calico-kube-controllers-57bc89478d-f479x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:03.361670 containerd[1579]: time="2025-05-17T00:18:03.361580255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:18:03.361670 containerd[1579]: time="2025-05-17T00:18:03.361647570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:18:03.361670 containerd[1579]: time="2025-05-17T00:18:03.361662348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:03.362394 containerd[1579]: time="2025-05-17T00:18:03.362339888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:03.385825 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:18:03.409956 containerd[1579]: time="2025-05-17T00:18:03.409915690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57bc89478d-f479x,Uid:3d2d9321-e897-4d8e-ae8f-ddb6087819df,Namespace:calico-system,Attempt:1,} returns sandbox id \"9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9\"" May 17 00:18:03.566383 systemd-networkd[1242]: calid0158dd7036: Gained IPv6LL May 17 00:18:03.566690 systemd-networkd[1242]: cali206766b87f1: Gained IPv6LL May 17 00:18:03.769088 kubelet[2724]: E0517 00:18:03.768973 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:03.774106 kubelet[2724]: E0517 00:18:03.774057 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:03.827200 kubelet[2724]: I0517 00:18:03.827130 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mm6b4" podStartSLOduration=36.827112872 podStartE2EDuration="36.827112872s" podCreationTimestamp="2025-05-17 00:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:18:03.824656148 +0000 UTC m=+42.985426943" watchObservedRunningTime="2025-05-17 00:18:03.827112872 +0000 UTC m=+42.987883567" May 17 00:18:03.931676 containerd[1579]: time="2025-05-17T00:18:03.931635359Z" level=info msg="StopPodSandbox for \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\"" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.235 
[INFO][5073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.237 [INFO][5073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" iface="eth0" netns="/var/run/netns/cni-9e984ecc-6823-6d54-fd24-f805082d6c9a" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.237 [INFO][5073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" iface="eth0" netns="/var/run/netns/cni-9e984ecc-6823-6d54-fd24-f805082d6c9a" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.237 [INFO][5073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" iface="eth0" netns="/var/run/netns/cni-9e984ecc-6823-6d54-fd24-f805082d6c9a" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.237 [INFO][5073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.237 [INFO][5073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.285 [INFO][5084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.286 [INFO][5084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.286 [INFO][5084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.293 [WARNING][5084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.294 [INFO][5084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.298 [INFO][5084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:04.310226 containerd[1579]: 2025-05-17 00:18:04.305 [INFO][5073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:04.311404 containerd[1579]: time="2025-05-17T00:18:04.311283953Z" level=info msg="TearDown network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\" successfully" May 17 00:18:04.311404 containerd[1579]: time="2025-05-17T00:18:04.311332253Z" level=info msg="StopPodSandbox for \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\" returns successfully" May 17 00:18:04.315700 systemd[1]: run-netns-cni\x2d9e984ecc\x2d6823\x2d6d54\x2dfd24\x2df805082d6c9a.mount: Deactivated successfully. 
May 17 00:18:04.316633 containerd[1579]: time="2025-05-17T00:18:04.316593003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-xtls5,Uid:deceb09e-4340-4f28-8a23-a33b54df6910,Namespace:calico-apiserver,Attempt:1,}" May 17 00:18:04.775904 kubelet[2724]: E0517 00:18:04.775873 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:04.776476 kubelet[2724]: E0517 00:18:04.776028 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:04.879944 systemd-networkd[1242]: cali4881db92099: Link UP May 17 00:18:04.880154 systemd-networkd[1242]: cali4881db92099: Gained carrier May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.518 [INFO][5118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.529 [INFO][5118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0 calico-apiserver-6fcc4f48fc- calico-apiserver deceb09e-4340-4f28-8a23-a33b54df6910 1054 0 2025-05-17 00:17:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcc4f48fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fcc4f48fc-xtls5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4881db92099 [] [] }} ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.529 [INFO][5118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.554 [INFO][5131] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" HandleID="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.554 [INFO][5131] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" HandleID="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059f2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fcc4f48fc-xtls5", "timestamp":"2025-05-17 00:18:04.554173518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.554 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.554 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.554 [INFO][5131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.617 [INFO][5131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.623 [INFO][5131] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.627 [INFO][5131] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.629 [INFO][5131] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.631 [INFO][5131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.631 [INFO][5131] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.632 [INFO][5131] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.682 [INFO][5131] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.874 [INFO][5131] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.874 [INFO][5131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" host="localhost" May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.874 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:04.958833 containerd[1579]: 2025-05-17 00:18:04.874 [INFO][5131] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" HandleID="k8s-pod-network.f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:05.371811 containerd[1579]: 2025-05-17 00:18:04.878 [INFO][5118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"deceb09e-4340-4f28-8a23-a33b54df6910", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fcc4f48fc-xtls5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4881db92099", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:05.371811 containerd[1579]: 2025-05-17 00:18:04.878 [INFO][5118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:05.371811 containerd[1579]: 2025-05-17 00:18:04.878 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4881db92099 ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:05.371811 containerd[1579]: 2025-05-17 00:18:04.880 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:05.371811 containerd[1579]: 2025-05-17 00:18:04.884 [INFO][5118] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"deceb09e-4340-4f28-8a23-a33b54df6910", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c", Pod:"calico-apiserver-6fcc4f48fc-xtls5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4881db92099", MAC:"c2:68:8b:e2:51:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:05.371811 containerd[1579]: 2025-05-17 00:18:04.955 [INFO][5118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc4f48fc-xtls5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:04.975403 systemd-networkd[1242]: caliac72e5a8af1: Gained IPv6LL May 17 00:18:05.705981 containerd[1579]: time="2025-05-17T00:18:05.705776033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:18:05.705981 containerd[1579]: time="2025-05-17T00:18:05.705861838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:18:05.705981 containerd[1579]: time="2025-05-17T00:18:05.705887437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:05.706317 containerd[1579]: time="2025-05-17T00:18:05.706026125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:05.713539 containerd[1579]: time="2025-05-17T00:18:05.713488342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:05.714535 containerd[1579]: time="2025-05-17T00:18:05.714496302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:18:05.715619 containerd[1579]: time="2025-05-17T00:18:05.715579046Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:05.718331 containerd[1579]: time="2025-05-17T00:18:05.718230662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:05.719106 containerd[1579]: time="2025-05-17T00:18:05.718882998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.887433217s" May 17 00:18:05.719106 containerd[1579]: time="2025-05-17T00:18:05.718925300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:18:05.720961 containerd[1579]: time="2025-05-17T00:18:05.720311528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:18:05.723661 containerd[1579]: time="2025-05-17T00:18:05.723215790Z" level=info msg="CreateContainer 
within sandbox \"092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:18:05.736064 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:18:05.743293 containerd[1579]: time="2025-05-17T00:18:05.743233610Z" level=info msg="CreateContainer within sandbox \"092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"84bdc773e1e00ec2315ad305ac21d81195eb88885bcafc9713bd4c1096f4e187\"" May 17 00:18:05.745004 containerd[1579]: time="2025-05-17T00:18:05.744967087Z" level=info msg="StartContainer for \"84bdc773e1e00ec2315ad305ac21d81195eb88885bcafc9713bd4c1096f4e187\"" May 17 00:18:05.768963 containerd[1579]: time="2025-05-17T00:18:05.768919603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc4f48fc-xtls5,Uid:deceb09e-4340-4f28-8a23-a33b54df6910,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c\"" May 17 00:18:05.773229 containerd[1579]: time="2025-05-17T00:18:05.772714770Z" level=info msg="CreateContainer within sandbox \"f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:18:05.781406 kubelet[2724]: E0517 00:18:05.781174 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:05.793641 containerd[1579]: time="2025-05-17T00:18:05.793588388Z" level=info msg="CreateContainer within sandbox \"f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"e4f7e52739478939ce62c1482d61ad88f42cbd32ab29258d479430c8c3ffff54\"" May 17 00:18:05.794288 containerd[1579]: time="2025-05-17T00:18:05.794225624Z" level=info msg="StartContainer for \"e4f7e52739478939ce62c1482d61ad88f42cbd32ab29258d479430c8c3ffff54\"" May 17 00:18:05.815707 containerd[1579]: time="2025-05-17T00:18:05.815654512Z" level=info msg="StartContainer for \"84bdc773e1e00ec2315ad305ac21d81195eb88885bcafc9713bd4c1096f4e187\" returns successfully" May 17 00:18:05.869154 containerd[1579]: time="2025-05-17T00:18:05.869100632Z" level=info msg="StartContainer for \"e4f7e52739478939ce62c1482d61ad88f42cbd32ab29258d479430c8c3ffff54\" returns successfully" May 17 00:18:05.932768 containerd[1579]: time="2025-05-17T00:18:05.932710119Z" level=info msg="StopPodSandbox for \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\"" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:05.976 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:05.976 [INFO][5314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" iface="eth0" netns="/var/run/netns/cni-d77155f3-f869-16d7-7be7-9e2bd1e78449" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:05.976 [INFO][5314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" iface="eth0" netns="/var/run/netns/cni-d77155f3-f869-16d7-7be7-9e2bd1e78449" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:05.977 [INFO][5314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" iface="eth0" netns="/var/run/netns/cni-d77155f3-f869-16d7-7be7-9e2bd1e78449" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:05.977 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:05.977 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:06.001 [INFO][5322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:06.001 [INFO][5322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:06.001 [INFO][5322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:06.007 [WARNING][5322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:06.007 [INFO][5322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:06.008 [INFO][5322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:06.013882 containerd[1579]: 2025-05-17 00:18:06.011 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:06.014444 containerd[1579]: time="2025-05-17T00:18:06.014002772Z" level=info msg="TearDown network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\" successfully" May 17 00:18:06.014444 containerd[1579]: time="2025-05-17T00:18:06.014074631Z" level=info msg="StopPodSandbox for \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\" returns successfully" May 17 00:18:06.015579 containerd[1579]: time="2025-05-17T00:18:06.015544399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdx7n,Uid:5d2460d1-6b11-4f05-a6fd-bf4b83ac6776,Namespace:calico-system,Attempt:1,}" May 17 00:18:06.315721 systemd-networkd[1242]: calia05c812fa6a: Link UP May 17 00:18:06.317374 systemd-networkd[1242]: calia05c812fa6a: Gained carrier May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.048 [INFO][5330] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.059 [INFO][5330] cni-plugin/plugin.go 340: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cdx7n-eth0 csi-node-driver- calico-system 5d2460d1-6b11-4f05-a6fd-bf4b83ac6776 1071 0 2025-05-17 00:17:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cdx7n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia05c812fa6a [] [] }} ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.064 [INFO][5330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.088 [INFO][5343] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" HandleID="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.088 [INFO][5343] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" HandleID="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9320), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cdx7n", "timestamp":"2025-05-17 00:18:06.088580374 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.088 [INFO][5343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.088 [INFO][5343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.088 [INFO][5343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.140 [INFO][5343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.146 [INFO][5343] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.149 [INFO][5343] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.151 [INFO][5343] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.153 [INFO][5343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.153 [INFO][5343] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" host="localhost" May 17 00:18:06.330387 
containerd[1579]: 2025-05-17 00:18:06.154 [INFO][5343] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.203 [INFO][5343] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.309 [INFO][5343] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.309 [INFO][5343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" host="localhost" May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.309 [INFO][5343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:18:06.330387 containerd[1579]: 2025-05-17 00:18:06.309 [INFO][5343] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" HandleID="k8s-pod-network.c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.330993 containerd[1579]: 2025-05-17 00:18:06.313 [INFO][5330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cdx7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cdx7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calia05c812fa6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:06.330993 containerd[1579]: 2025-05-17 00:18:06.313 [INFO][5330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.330993 containerd[1579]: 2025-05-17 00:18:06.313 [INFO][5330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia05c812fa6a ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.330993 containerd[1579]: 2025-05-17 00:18:06.317 [INFO][5330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.330993 containerd[1579]: 2025-05-17 00:18:06.317 [INFO][5330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cdx7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf", Pod:"csi-node-driver-cdx7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia05c812fa6a", MAC:"ba:c6:77:7e:81:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:06.330993 containerd[1579]: 2025-05-17 00:18:06.326 [INFO][5330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf" Namespace="calico-system" Pod="csi-node-driver-cdx7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:06.352789 containerd[1579]: time="2025-05-17T00:18:06.352690881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:18:06.352789 containerd[1579]: time="2025-05-17T00:18:06.352756427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:18:06.352789 containerd[1579]: time="2025-05-17T00:18:06.352771266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:06.352897 containerd[1579]: time="2025-05-17T00:18:06.352854647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:18:06.383372 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:18:06.400616 containerd[1579]: time="2025-05-17T00:18:06.400569085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdx7n,Uid:5d2460d1-6b11-4f05-a6fd-bf4b83ac6776,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf\"" May 17 00:18:06.638407 systemd-networkd[1242]: cali4881db92099: Gained IPv6LL May 17 00:18:06.717612 systemd[1]: run-netns-cni\x2dd77155f3\x2df869\x2d16d7\x2d7be7\x2d9e2bd1e78449.mount: Deactivated successfully. 
May 17 00:18:06.802654 kubelet[2724]: I0517 00:18:06.802588 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-xtls5" podStartSLOduration=30.802565695 podStartE2EDuration="30.802565695s" podCreationTimestamp="2025-05-17 00:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:18:06.800235031 +0000 UTC m=+45.961005726" watchObservedRunningTime="2025-05-17 00:18:06.802565695 +0000 UTC m=+45.963336400" May 17 00:18:06.810344 kubelet[2724]: I0517 00:18:06.809983 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcc4f48fc-87trf" podStartSLOduration=26.921064077 podStartE2EDuration="30.809959861s" podCreationTimestamp="2025-05-17 00:17:36 +0000 UTC" firstStartedPulling="2025-05-17 00:18:01.831069477 +0000 UTC m=+40.991840172" lastFinishedPulling="2025-05-17 00:18:05.719965261 +0000 UTC m=+44.880735956" observedRunningTime="2025-05-17 00:18:06.809420934 +0000 UTC m=+45.970191639" watchObservedRunningTime="2025-05-17 00:18:06.809959861 +0000 UTC m=+45.970730556" May 17 00:18:07.263996 kubelet[2724]: I0517 00:18:07.263956 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:18:07.264347 kubelet[2724]: E0517 00:18:07.264316 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:07.795215 kubelet[2724]: E0517 00:18:07.795142 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:18:07.826542 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:35026.service - OpenSSH per-connection server daemon (10.0.0.1:35026). 
May 17 00:18:07.909407 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 35026 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:07.911084 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:07.918296 systemd-logind[1564]: New session 10 of user core. May 17 00:18:07.923009 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:18:08.039349 kernel: bpftool[5509]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:18:08.104714 sshd[5450]: pam_unix(sshd:session): session closed for user core May 17 00:18:08.110476 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. May 17 00:18:08.112486 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:35026.service: Deactivated successfully. May 17 00:18:08.117596 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:18:08.119094 systemd-logind[1564]: Removed session 10. May 17 00:18:08.238399 systemd-networkd[1242]: calia05c812fa6a: Gained IPv6LL May 17 00:18:08.342207 systemd-networkd[1242]: vxlan.calico: Link UP May 17 00:18:08.342216 systemd-networkd[1242]: vxlan.calico: Gained carrier May 17 00:18:09.239186 containerd[1579]: time="2025-05-17T00:18:09.239147299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:09.241231 containerd[1579]: time="2025-05-17T00:18:09.241208987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:18:09.243865 containerd[1579]: time="2025-05-17T00:18:09.243818778Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:09.249436 containerd[1579]: time="2025-05-17T00:18:09.249185392Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:09.250393 containerd[1579]: time="2025-05-17T00:18:09.249579810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.529225629s" May 17 00:18:09.250448 containerd[1579]: time="2025-05-17T00:18:09.250398351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:18:09.253224 containerd[1579]: time="2025-05-17T00:18:09.253193077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:18:09.263431 containerd[1579]: time="2025-05-17T00:18:09.263184812Z" level=info msg="CreateContainer within sandbox \"9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:18:09.364547 containerd[1579]: time="2025-05-17T00:18:09.364478611Z" level=info msg="CreateContainer within sandbox \"9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"809966a6b80b1ffff52b58ac89b9b1f8b0f2080b341a888a2901522bdcae6959\"" May 17 00:18:09.365195 containerd[1579]: time="2025-05-17T00:18:09.365105855Z" level=info msg="StartContainer for \"809966a6b80b1ffff52b58ac89b9b1f8b0f2080b341a888a2901522bdcae6959\"" May 17 00:18:09.454944 containerd[1579]: time="2025-05-17T00:18:09.454857130Z" level=info msg="StartContainer for 
\"809966a6b80b1ffff52b58ac89b9b1f8b0f2080b341a888a2901522bdcae6959\" returns successfully" May 17 00:18:09.519053 systemd-networkd[1242]: vxlan.calico: Gained IPv6LL May 17 00:18:09.823549 kubelet[2724]: I0517 00:18:09.823064 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57bc89478d-f479x" podStartSLOduration=24.981764008 podStartE2EDuration="30.823044526s" podCreationTimestamp="2025-05-17 00:17:39 +0000 UTC" firstStartedPulling="2025-05-17 00:18:03.411125167 +0000 UTC m=+42.571895862" lastFinishedPulling="2025-05-17 00:18:09.252405695 +0000 UTC m=+48.413176380" observedRunningTime="2025-05-17 00:18:09.822842568 +0000 UTC m=+48.983613283" watchObservedRunningTime="2025-05-17 00:18:09.823044526 +0000 UTC m=+48.983815241" May 17 00:18:10.730679 containerd[1579]: time="2025-05-17T00:18:10.730627613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:10.731766 containerd[1579]: time="2025-05-17T00:18:10.731726552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:18:10.732838 containerd[1579]: time="2025-05-17T00:18:10.732812424Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:10.735051 containerd[1579]: time="2025-05-17T00:18:10.735001686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:10.735489 containerd[1579]: time="2025-05-17T00:18:10.735456018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.482232782s" May 17 00:18:10.735489 containerd[1579]: time="2025-05-17T00:18:10.735486266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:18:10.737442 containerd[1579]: time="2025-05-17T00:18:10.737407232Z" level=info msg="CreateContainer within sandbox \"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:18:10.762399 containerd[1579]: time="2025-05-17T00:18:10.762354842Z" level=info msg="CreateContainer within sandbox \"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ce87680237bfb321a5437fcafab8a8c9d3de9fca91bcd3a1785a7f938be5fd0\"" May 17 00:18:10.763048 containerd[1579]: time="2025-05-17T00:18:10.763003487Z" level=info msg="StartContainer for \"3ce87680237bfb321a5437fcafab8a8c9d3de9fca91bcd3a1785a7f938be5fd0\"" May 17 00:18:10.836288 containerd[1579]: time="2025-05-17T00:18:10.836221149Z" level=info msg="StartContainer for \"3ce87680237bfb321a5437fcafab8a8c9d3de9fca91bcd3a1785a7f938be5fd0\" returns successfully" May 17 00:18:10.838009 containerd[1579]: time="2025-05-17T00:18:10.837980225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:18:13.113449 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:59368.service - OpenSSH per-connection server daemon (10.0.0.1:59368). 
May 17 00:18:13.222995 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 59368 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:13.226685 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:13.231589 systemd-logind[1564]: New session 11 of user core. May 17 00:18:13.241508 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:18:13.379805 sshd[5704]: pam_unix(sshd:session): session closed for user core May 17 00:18:13.388636 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:59384.service - OpenSSH per-connection server daemon (10.0.0.1:59384). May 17 00:18:13.390194 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:59368.service: Deactivated successfully. May 17 00:18:13.394738 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:18:13.396131 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. May 17 00:18:13.397106 systemd-logind[1564]: Removed session 11. May 17 00:18:13.429348 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 59384 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:13.431074 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:13.435239 systemd-logind[1564]: New session 12 of user core. May 17 00:18:13.440490 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:18:13.654940 sshd[5718]: pam_unix(sshd:session): session closed for user core May 17 00:18:13.668518 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:59388.service - OpenSSH per-connection server daemon (10.0.0.1:59388). May 17 00:18:13.669171 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:59384.service: Deactivated successfully. May 17 00:18:13.678962 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:18:13.681633 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. 
May 17 00:18:13.683282 systemd-logind[1564]: Removed session 12. May 17 00:18:13.708137 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 59388 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:13.710214 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:13.715393 systemd-logind[1564]: New session 13 of user core. May 17 00:18:13.721648 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:18:14.879073 sshd[5735]: pam_unix(sshd:session): session closed for user core May 17 00:18:14.883162 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:59388.service: Deactivated successfully. May 17 00:18:14.885575 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. May 17 00:18:14.885753 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:18:14.887081 systemd-logind[1564]: Removed session 13. May 17 00:18:15.048364 containerd[1579]: time="2025-05-17T00:18:15.048295083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:15.102104 containerd[1579]: time="2025-05-17T00:18:15.102053199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:18:15.150343 systemd-resolved[1456]: Under memory pressure, flushing caches. May 17 00:18:15.161328 systemd-journald[1156]: Under memory pressure, flushing caches. May 17 00:18:15.150375 systemd-resolved[1456]: Flushed all caches. 
May 17 00:18:15.203959 containerd[1579]: time="2025-05-17T00:18:15.203902227Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:15.273813 containerd[1579]: time="2025-05-17T00:18:15.273746541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:15.274569 containerd[1579]: time="2025-05-17T00:18:15.274517747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 4.43649511s" May 17 00:18:15.274617 containerd[1579]: time="2025-05-17T00:18:15.274566540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:18:15.275523 containerd[1579]: time="2025-05-17T00:18:15.275499726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:18:15.276459 containerd[1579]: time="2025-05-17T00:18:15.276435846Z" level=info msg="CreateContainer within sandbox \"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:18:15.598685 containerd[1579]: time="2025-05-17T00:18:15.598632929Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:18:15.674878 containerd[1579]: time="2025-05-17T00:18:15.674779611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:18:15.674878 containerd[1579]: time="2025-05-17T00:18:15.674818485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:18:15.698982 containerd[1579]: time="2025-05-17T00:18:15.675481885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:18:15.699021 kubelet[2724]: E0517 00:18:15.675039 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:18:15.699021 kubelet[2724]: E0517 00:18:15.675103 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:18:15.699021 kubelet[2724]: E0517 00:18:15.675311 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5fa0e8b210c943fe9a524550ec7c8a90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plrtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f7f9c875b-6g4bk_calico-system(b3843bf5-7516-4c9f-923b-822352f7eab5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:18:16.071132 containerd[1579]: time="2025-05-17T00:18:16.071076159Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:18:16.140163 containerd[1579]: time="2025-05-17T00:18:16.140106666Z" level=info msg="CreateContainer within sandbox \"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cd17d8a477ea3d29728b02f1efd2986f16704b43269abe4c4e3f28bf9132cf03\"" May 17 00:18:16.140783 containerd[1579]: time="2025-05-17T00:18:16.140742162Z" level=info msg="StartContainer for \"cd17d8a477ea3d29728b02f1efd2986f16704b43269abe4c4e3f28bf9132cf03\"" May 17 00:18:16.173367 containerd[1579]: time="2025-05-17T00:18:16.173295734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:18:16.173527 containerd[1579]: time="2025-05-17T00:18:16.173367402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:18:16.173580 kubelet[2724]: E0517 00:18:16.173504 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:18:16.173580 kubelet[2724]: E0517 00:18:16.173557 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:18:16.174365 kubelet[2724]: E0517 00:18:16.173766 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n49l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-82l2f_calico-system(1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
logger="UnhandledError" May 17 00:18:16.175206 containerd[1579]: time="2025-05-17T00:18:16.174810622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:18:16.175285 kubelet[2724]: E0517 00:18:16.175155 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64" May 17 00:18:16.457590 containerd[1579]: time="2025-05-17T00:18:16.457439965Z" level=info msg="StartContainer for \"cd17d8a477ea3d29728b02f1efd2986f16704b43269abe4c4e3f28bf9132cf03\" returns successfully" May 17 00:18:16.700919 containerd[1579]: time="2025-05-17T00:18:16.700846443Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:18:16.703280 containerd[1579]: time="2025-05-17T00:18:16.703209512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:18:16.703441 containerd[1579]: time="2025-05-17T00:18:16.703241534Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:18:16.703514 kubelet[2724]: E0517 00:18:16.703459 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:18:16.703991 kubelet[2724]: E0517 00:18:16.703519 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:18:16.703991 kubelet[2724]: E0517 00:18:16.703632 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plrtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f7f9c875b-6g4bk_calico-system(b3843bf5-7516-4c9f-923b-822352f7eab5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:18:16.704849 kubelet[2724]: E0517 00:18:16.704788 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f7f9c875b-6g4bk" podUID="b3843bf5-7516-4c9f-923b-822352f7eab5" May 17 00:18:17.014882 kubelet[2724]: I0517 00:18:17.014845 2724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:18:17.014882 kubelet[2724]: I0517 00:18:17.014886 2724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:18:19.846466 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:34960.service - OpenSSH per-connection server daemon (10.0.0.1:34960). 
May 17 00:18:19.886284 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 34960 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:19.888184 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:19.892026 systemd-logind[1564]: New session 14 of user core. May 17 00:18:19.897508 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:18:20.081401 sshd[5834]: pam_unix(sshd:session): session closed for user core May 17 00:18:20.085828 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:34960.service: Deactivated successfully. May 17 00:18:20.087976 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. May 17 00:18:20.088035 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:18:20.088935 systemd-logind[1564]: Removed session 14. May 17 00:18:20.923011 containerd[1579]: time="2025-05-17T00:18:20.922958345Z" level=info msg="StopPodSandbox for \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\"" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.956 [WARNING][5859] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff", Pod:"goldmane-8f77d7b6c-82l2f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e9afa9cfa2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.957 [INFO][5859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.957 [INFO][5859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" iface="eth0" netns="" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.957 [INFO][5859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.957 [INFO][5859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.978 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.978 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.978 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.985 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.986 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.987 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:20.995359 containerd[1579]: 2025-05-17 00:18:20.991 [INFO][5859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:20.995957 containerd[1579]: time="2025-05-17T00:18:20.995389837Z" level=info msg="TearDown network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\" successfully" May 17 00:18:20.995957 containerd[1579]: time="2025-05-17T00:18:20.995415557Z" level=info msg="StopPodSandbox for \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\" returns successfully" May 17 00:18:20.995957 containerd[1579]: time="2025-05-17T00:18:20.995912144Z" level=info msg="RemovePodSandbox for \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\"" May 17 00:18:20.998508 containerd[1579]: time="2025-05-17T00:18:20.998474134Z" level=info msg="Forcibly stopping sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\"" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.033 [WARNING][5888] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b053db71d901b206d959b2f659c5c27acd18fbd22ac8338fa3de3d6c8b09fff", Pod:"goldmane-8f77d7b6c-82l2f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e9afa9cfa2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.033 [INFO][5888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.034 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" iface="eth0" netns="" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.034 [INFO][5888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.034 [INFO][5888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.052 [INFO][5897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.052 [INFO][5897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.052 [INFO][5897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.057 [WARNING][5897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.057 [INFO][5897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" HandleID="k8s-pod-network.9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" Workload="localhost-k8s-goldmane--8f77d7b6c--82l2f-eth0" May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.058 [INFO][5897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.063723 containerd[1579]: 2025-05-17 00:18:21.060 [INFO][5888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0" May 17 00:18:21.064177 containerd[1579]: time="2025-05-17T00:18:21.063749305Z" level=info msg="TearDown network for sandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\" successfully" May 17 00:18:21.146604 containerd[1579]: time="2025-05-17T00:18:21.146553875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:18:21.146779 containerd[1579]: time="2025-05-17T00:18:21.146633688Z" level=info msg="RemovePodSandbox \"9ee200a8ceea844d6ba8c7861e41aafbc9e5e5753174802ae53bd8a745dad2e0\" returns successfully" May 17 00:18:21.147078 containerd[1579]: time="2025-05-17T00:18:21.147048809Z" level=info msg="StopPodSandbox for \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\"" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.179 [WARNING][5914] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0", GenerateName:"calico-kube-controllers-57bc89478d-", Namespace:"calico-system", SelfLink:"", UID:"3d2d9321-e897-4d8e-ae8f-ddb6087819df", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57bc89478d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9", Pod:"calico-kube-controllers-57bc89478d-f479x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac72e5a8af1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.180 [INFO][5914] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.180 [INFO][5914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" iface="eth0" netns="" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.180 [INFO][5914] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.180 [INFO][5914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.201 [INFO][5923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.201 [INFO][5923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.201 [INFO][5923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.207 [WARNING][5923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.207 [INFO][5923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.208 [INFO][5923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.214322 containerd[1579]: 2025-05-17 00:18:21.211 [INFO][5914] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.214322 containerd[1579]: time="2025-05-17T00:18:21.214270050Z" level=info msg="TearDown network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\" successfully" May 17 00:18:21.214322 containerd[1579]: time="2025-05-17T00:18:21.214295178Z" level=info msg="StopPodSandbox for \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\" returns successfully" May 17 00:18:21.274055 containerd[1579]: time="2025-05-17T00:18:21.214922986Z" level=info msg="RemovePodSandbox for \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\"" May 17 00:18:21.274055 containerd[1579]: time="2025-05-17T00:18:21.214970767Z" level=info msg="Forcibly stopping sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\"" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.244 [WARNING][5941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0", GenerateName:"calico-kube-controllers-57bc89478d-", Namespace:"calico-system", SelfLink:"", UID:"3d2d9321-e897-4d8e-ae8f-ddb6087819df", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57bc89478d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9800c985274a3b809f516c99558cfb695c59eb03211dc2e8952622af98b924e9", Pod:"calico-kube-controllers-57bc89478d-f479x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac72e5a8af1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.244 [INFO][5941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.244 [INFO][5941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" iface="eth0" netns="" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.244 [INFO][5941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.244 [INFO][5941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.262 [INFO][5950] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.262 [INFO][5950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.262 [INFO][5950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.267 [WARNING][5950] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.267 [INFO][5950] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" HandleID="k8s-pod-network.d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" Workload="localhost-k8s-calico--kube--controllers--57bc89478d--f479x-eth0" May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.268 [INFO][5950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.274055 containerd[1579]: 2025-05-17 00:18:21.270 [INFO][5941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5" May 17 00:18:21.274055 containerd[1579]: time="2025-05-17T00:18:21.273111434Z" level=info msg="TearDown network for sandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\" successfully" May 17 00:18:21.437796 containerd[1579]: time="2025-05-17T00:18:21.437728291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:18:21.437958 containerd[1579]: time="2025-05-17T00:18:21.437819535Z" level=info msg="RemovePodSandbox \"d224d3edfa96d199449ba9821a112af5877ec9451530ec4e46e36213e971b9c5\" returns successfully" May 17 00:18:21.438556 containerd[1579]: time="2025-05-17T00:18:21.438499162Z" level=info msg="StopPodSandbox for \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\"" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.473 [WARNING][5968] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5188de2f-1d4a-4fed-8a5e-e1444595d2e7", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314", Pod:"coredns-7c65d6cfc9-vgszq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali206766b87f1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.474 [INFO][5968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.474 [INFO][5968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" iface="eth0" netns="" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.474 [INFO][5968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.474 [INFO][5968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.495 [INFO][5976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.495 [INFO][5976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.495 [INFO][5976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.500 [WARNING][5976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.500 [INFO][5976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.502 [INFO][5976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.508372 containerd[1579]: 2025-05-17 00:18:21.505 [INFO][5968] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.508372 containerd[1579]: time="2025-05-17T00:18:21.508352213Z" level=info msg="TearDown network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\" successfully" May 17 00:18:21.508824 containerd[1579]: time="2025-05-17T00:18:21.508383603Z" level=info msg="StopPodSandbox for \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\" returns successfully" May 17 00:18:21.509047 containerd[1579]: time="2025-05-17T00:18:21.509010599Z" level=info msg="RemovePodSandbox for \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\"" May 17 00:18:21.509089 containerd[1579]: time="2025-05-17T00:18:21.509073159Z" level=info msg="Forcibly stopping sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\"" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.544 [WARNING][5995] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5188de2f-1d4a-4fed-8a5e-e1444595d2e7", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1807014686a27a70aea8966d08f33985039dd14924c71f7cafa3dee5b4714314", Pod:"coredns-7c65d6cfc9-vgszq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali206766b87f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.545 [INFO][5995] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.545 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" iface="eth0" netns="" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.545 [INFO][5995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.545 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.565 [INFO][6004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.565 [INFO][6004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.565 [INFO][6004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.571 [WARNING][6004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.571 [INFO][6004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" HandleID="k8s-pod-network.3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" Workload="localhost-k8s-coredns--7c65d6cfc9--vgszq-eth0" May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.572 [INFO][6004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.577738 containerd[1579]: 2025-05-17 00:18:21.574 [INFO][5995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76" May 17 00:18:21.578182 containerd[1579]: time="2025-05-17T00:18:21.577784462Z" level=info msg="TearDown network for sandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\" successfully" May 17 00:18:21.634338 containerd[1579]: time="2025-05-17T00:18:21.634276184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:18:21.634436 containerd[1579]: time="2025-05-17T00:18:21.634354274Z" level=info msg="RemovePodSandbox \"3410db18317da3323b1d58a6af66ad804ed82c926d7c9629e0ee489ff7707f76\" returns successfully" May 17 00:18:21.634947 containerd[1579]: time="2025-05-17T00:18:21.634904162Z" level=info msg="StopPodSandbox for \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\"" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.670 [WARNING][6022] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab4da613-d8f1-4a47-86db-18da03ede1ec", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5", Pod:"calico-apiserver-6fcc4f48fc-87trf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5bc61f51f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.671 [INFO][6022] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.671 [INFO][6022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" iface="eth0" netns="" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.671 [INFO][6022] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.671 [INFO][6022] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.692 [INFO][6030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.692 [INFO][6030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.692 [INFO][6030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.697 [WARNING][6030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.697 [INFO][6030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.698 [INFO][6030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.703914 containerd[1579]: 2025-05-17 00:18:21.701 [INFO][6022] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.704392 containerd[1579]: time="2025-05-17T00:18:21.703955895Z" level=info msg="TearDown network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\" successfully" May 17 00:18:21.704392 containerd[1579]: time="2025-05-17T00:18:21.703995081Z" level=info msg="StopPodSandbox for \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\" returns successfully" May 17 00:18:21.704505 containerd[1579]: time="2025-05-17T00:18:21.704477140Z" level=info msg="RemovePodSandbox for \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\"" May 17 00:18:21.704535 containerd[1579]: time="2025-05-17T00:18:21.704517106Z" level=info msg="Forcibly stopping sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\"" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.737 [WARNING][6047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab4da613-d8f1-4a47-86db-18da03ede1ec", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"092a46bf9aa371d6922133c780ce29f60f3774c3c30de40e99e4bdaccf06cfa5", Pod:"calico-apiserver-6fcc4f48fc-87trf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5bc61f51f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.738 [INFO][6047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.738 [INFO][6047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" iface="eth0" netns="" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.738 [INFO][6047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.738 [INFO][6047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.760 [INFO][6055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.760 [INFO][6055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.760 [INFO][6055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.765 [WARNING][6055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.765 [INFO][6055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" HandleID="k8s-pod-network.d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--87trf-eth0" May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.767 [INFO][6055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.772696 containerd[1579]: 2025-05-17 00:18:21.769 [INFO][6047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364" May 17 00:18:21.772696 containerd[1579]: time="2025-05-17T00:18:21.772649696Z" level=info msg="TearDown network for sandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\" successfully" May 17 00:18:21.819265 containerd[1579]: time="2025-05-17T00:18:21.819212055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:18:21.819383 containerd[1579]: time="2025-05-17T00:18:21.819291897Z" level=info msg="RemovePodSandbox \"d30d618c0c5227e956b8481f02314a8de5f90e072eb4b78673a3f7a181fc3364\" returns successfully" May 17 00:18:21.819856 containerd[1579]: time="2025-05-17T00:18:21.819820736Z" level=info msg="StopPodSandbox for \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\"" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.851 [WARNING][6074] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cdx7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf", Pod:"csi-node-driver-cdx7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia05c812fa6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.851 [INFO][6074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.851 [INFO][6074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" iface="eth0" netns="" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.851 [INFO][6074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.851 [INFO][6074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.873 [INFO][6083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.873 [INFO][6083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.873 [INFO][6083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.879 [WARNING][6083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.880 [INFO][6083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.881 [INFO][6083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.886422 containerd[1579]: 2025-05-17 00:18:21.883 [INFO][6074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.886845 containerd[1579]: time="2025-05-17T00:18:21.886464816Z" level=info msg="TearDown network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\" successfully" May 17 00:18:21.886845 containerd[1579]: time="2025-05-17T00:18:21.886492328Z" level=info msg="StopPodSandbox for \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\" returns successfully" May 17 00:18:21.886958 containerd[1579]: time="2025-05-17T00:18:21.886924653Z" level=info msg="RemovePodSandbox for \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\"" May 17 00:18:21.886994 containerd[1579]: time="2025-05-17T00:18:21.886956584Z" level=info msg="Forcibly stopping sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\"" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.961 [WARNING][6100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cdx7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d2460d1-6b11-4f05-a6fd-bf4b83ac6776", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3ee894c7c8081014c1ca17b31849986daf3d34a7362c20da8820476bba8b7cf", Pod:"csi-node-driver-cdx7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia05c812fa6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.962 [INFO][6100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.962 [INFO][6100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" iface="eth0" netns="" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.962 [INFO][6100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.962 [INFO][6100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.982 [INFO][6108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.982 [INFO][6108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.982 [INFO][6108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.987 [WARNING][6108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.987 [INFO][6108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" HandleID="k8s-pod-network.040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" Workload="localhost-k8s-csi--node--driver--cdx7n-eth0" May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.989 [INFO][6108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:21.994205 containerd[1579]: 2025-05-17 00:18:21.991 [INFO][6100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821" May 17 00:18:21.994923 containerd[1579]: time="2025-05-17T00:18:21.994277655Z" level=info msg="TearDown network for sandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\" successfully" May 17 00:18:22.006036 containerd[1579]: time="2025-05-17T00:18:22.006002389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:18:22.006104 containerd[1579]: time="2025-05-17T00:18:22.006083584Z" level=info msg="RemovePodSandbox \"040b0dd000c5cd7b5e6111a0794a907fc43995ba69dc65a47bc3e4d1f7cd7821\" returns successfully" May 17 00:18:22.006652 containerd[1579]: time="2025-05-17T00:18:22.006610208Z" level=info msg="StopPodSandbox for \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\"" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.040 [WARNING][6126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"deceb09e-4340-4f28-8a23-a33b54df6910", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c", Pod:"calico-apiserver-6fcc4f48fc-xtls5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4881db92099", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.040 [INFO][6126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.040 [INFO][6126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" iface="eth0" netns="" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.040 [INFO][6126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.040 [INFO][6126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.061 [INFO][6135] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.061 [INFO][6135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.061 [INFO][6135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.068 [WARNING][6135] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.068 [INFO][6135] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.070 [INFO][6135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:22.076310 containerd[1579]: 2025-05-17 00:18:22.073 [INFO][6126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.076864 containerd[1579]: time="2025-05-17T00:18:22.076359333Z" level=info msg="TearDown network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\" successfully" May 17 00:18:22.076864 containerd[1579]: time="2025-05-17T00:18:22.076393548Z" level=info msg="StopPodSandbox for \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\" returns successfully" May 17 00:18:22.077060 containerd[1579]: time="2025-05-17T00:18:22.076994905Z" level=info msg="RemovePodSandbox for \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\"" May 17 00:18:22.077060 containerd[1579]: time="2025-05-17T00:18:22.077048708Z" level=info msg="Forcibly stopping sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\"" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.112 [WARNING][6153] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0", GenerateName:"calico-apiserver-6fcc4f48fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"deceb09e-4340-4f28-8a23-a33b54df6910", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc4f48fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b72c78324517c051c55071672ba9f4fdf79f50bcadefe21869d0ab4d7d234c", Pod:"calico-apiserver-6fcc4f48fc-xtls5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4881db92099", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.113 [INFO][6153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.113 [INFO][6153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" iface="eth0" netns="" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.113 [INFO][6153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.113 [INFO][6153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.133 [INFO][6162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.133 [INFO][6162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.133 [INFO][6162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.139 [WARNING][6162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.139 [INFO][6162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" HandleID="k8s-pod-network.4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" Workload="localhost-k8s-calico--apiserver--6fcc4f48fc--xtls5-eth0" May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.140 [INFO][6162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:22.146216 containerd[1579]: 2025-05-17 00:18:22.143 [INFO][6153] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64" May 17 00:18:22.146767 containerd[1579]: time="2025-05-17T00:18:22.146268154Z" level=info msg="TearDown network for sandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\" successfully" May 17 00:18:22.151213 containerd[1579]: time="2025-05-17T00:18:22.151152487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:18:22.151327 containerd[1579]: time="2025-05-17T00:18:22.151235015Z" level=info msg="RemovePodSandbox \"4f4a235d194debb593b49fa928bf92e060e86ef902ba311388a92a7701571f64\" returns successfully" May 17 00:18:22.151742 containerd[1579]: time="2025-05-17T00:18:22.151720090Z" level=info msg="StopPodSandbox for \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\"" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.182 [WARNING][6180] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d70a794c-b705-4096-ab09-a29d9b66f140", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304", Pod:"coredns-7c65d6cfc9-mm6b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0158dd7036", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.182 [INFO][6180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.182 [INFO][6180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" iface="eth0" netns="" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.182 [INFO][6180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.182 [INFO][6180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.201 [INFO][6188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.202 [INFO][6188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.202 [INFO][6188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.208 [WARNING][6188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.208 [INFO][6188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.209 [INFO][6188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:22.214410 containerd[1579]: 2025-05-17 00:18:22.211 [INFO][6180] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.214962 containerd[1579]: time="2025-05-17T00:18:22.214453407Z" level=info msg="TearDown network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\" successfully" May 17 00:18:22.214962 containerd[1579]: time="2025-05-17T00:18:22.214479657Z" level=info msg="StopPodSandbox for \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\" returns successfully" May 17 00:18:22.214962 containerd[1579]: time="2025-05-17T00:18:22.214937560Z" level=info msg="RemovePodSandbox for \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\"" May 17 00:18:22.215059 containerd[1579]: time="2025-05-17T00:18:22.214962498Z" level=info msg="Forcibly stopping sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\"" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.246 [WARNING][6207] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d70a794c-b705-4096-ab09-a29d9b66f140", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47170dee0b54c354d2d608ba7fa53c80b33cf13948332cc28042412e06235304", Pod:"coredns-7c65d6cfc9-mm6b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0158dd7036", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.246 [INFO][6207] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.246 [INFO][6207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" iface="eth0" netns="" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.246 [INFO][6207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.246 [INFO][6207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.267 [INFO][6215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.267 [INFO][6215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.267 [INFO][6215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.272 [WARNING][6215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.272 [INFO][6215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" HandleID="k8s-pod-network.5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" Workload="localhost-k8s-coredns--7c65d6cfc9--mm6b4-eth0" May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.273 [INFO][6215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:22.278816 containerd[1579]: 2025-05-17 00:18:22.276 [INFO][6207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2" May 17 00:18:22.279218 containerd[1579]: time="2025-05-17T00:18:22.278867199Z" level=info msg="TearDown network for sandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\" successfully" May 17 00:18:22.282867 containerd[1579]: time="2025-05-17T00:18:22.282841798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:18:22.282931 containerd[1579]: time="2025-05-17T00:18:22.282893247Z" level=info msg="RemovePodSandbox \"5ee4966ca7abf1f9a7222588026e5c278ac488019007f1e668fb66e6c5086fa2\" returns successfully" May 17 00:18:22.283394 containerd[1579]: time="2025-05-17T00:18:22.283372711Z" level=info msg="StopPodSandbox for \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\"" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.315 [WARNING][6234] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" WorkloadEndpoint="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.315 [INFO][6234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.315 [INFO][6234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" iface="eth0" netns="" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.315 [INFO][6234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.315 [INFO][6234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.333 [INFO][6242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.334 [INFO][6242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.334 [INFO][6242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.340 [WARNING][6242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.340 [INFO][6242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.341 [INFO][6242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:22.346372 containerd[1579]: 2025-05-17 00:18:22.343 [INFO][6234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.346372 containerd[1579]: time="2025-05-17T00:18:22.346343961Z" level=info msg="TearDown network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\" successfully" May 17 00:18:22.346835 containerd[1579]: time="2025-05-17T00:18:22.346379820Z" level=info msg="StopPodSandbox for \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\" returns successfully" May 17 00:18:22.347591 containerd[1579]: time="2025-05-17T00:18:22.347396799Z" level=info msg="RemovePodSandbox for \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\"" May 17 00:18:22.347591 containerd[1579]: time="2025-05-17T00:18:22.347429071Z" level=info msg="Forcibly stopping sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\"" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.377 [WARNING][6260] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" 
WorkloadEndpoint="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.377 [INFO][6260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.377 [INFO][6260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" iface="eth0" netns="" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.377 [INFO][6260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.377 [INFO][6260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.396 [INFO][6269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.396 [INFO][6269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.396 [INFO][6269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.401 [WARNING][6269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.401 [INFO][6269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" HandleID="k8s-pod-network.3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" Workload="localhost-k8s-whisker--b46bdf5fd--tkbfp-eth0" May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.402 [INFO][6269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:18:22.407767 containerd[1579]: 2025-05-17 00:18:22.405 [INFO][6260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936" May 17 00:18:22.408176 containerd[1579]: time="2025-05-17T00:18:22.407819050Z" level=info msg="TearDown network for sandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\" successfully" May 17 00:18:22.411900 containerd[1579]: time="2025-05-17T00:18:22.411868160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:18:22.411944 containerd[1579]: time="2025-05-17T00:18:22.411922775Z" level=info msg="RemovePodSandbox \"3d7aa026ac0b888e25beddd8d4442ed1736fd58f33ea7ff882af6d7f10f04936\" returns successfully" May 17 00:18:25.101483 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:34964.service - OpenSSH per-connection server daemon (10.0.0.1:34964). 
May 17 00:18:25.139212 sshd[6297]: Accepted publickey for core from 10.0.0.1 port 34964 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:25.140753 sshd[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:25.144369 systemd-logind[1564]: New session 15 of user core. May 17 00:18:25.153494 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:18:25.269938 sshd[6297]: pam_unix(sshd:session): session closed for user core May 17 00:18:25.278505 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). May 17 00:18:25.279124 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:34964.service: Deactivated successfully. May 17 00:18:25.280926 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:18:25.282623 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. May 17 00:18:25.283512 systemd-logind[1564]: Removed session 15. May 17 00:18:25.312686 sshd[6309]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:25.314065 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:25.317968 systemd-logind[1564]: New session 16 of user core. May 17 00:18:25.329494 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:18:25.677483 sshd[6309]: pam_unix(sshd:session): session closed for user core May 17 00:18:25.684469 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:34986.service - OpenSSH per-connection server daemon (10.0.0.1:34986). May 17 00:18:25.684945 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:34970.service: Deactivated successfully. May 17 00:18:25.688853 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. May 17 00:18:25.689410 systemd[1]: session-16.scope: Deactivated successfully. 
May 17 00:18:25.690500 systemd-logind[1564]: Removed session 16. May 17 00:18:25.723905 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 34986 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:25.726026 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:25.730339 systemd-logind[1564]: New session 17 of user core. May 17 00:18:25.740506 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:18:27.544044 sshd[6322]: pam_unix(sshd:session): session closed for user core May 17 00:18:27.553903 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:34988.service - OpenSSH per-connection server daemon (10.0.0.1:34988). May 17 00:18:27.555096 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:34986.service: Deactivated successfully. May 17 00:18:27.558831 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. May 17 00:18:27.562591 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:18:27.563582 systemd-logind[1564]: Removed session 17. May 17 00:18:27.597052 sshd[6342]: Accepted publickey for core from 10.0.0.1 port 34988 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:18:27.598763 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:27.602691 systemd-logind[1564]: New session 18 of user core. May 17 00:18:27.613590 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:18:28.052817 sshd[6342]: pam_unix(sshd:session): session closed for user core May 17 00:18:28.059550 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:33834.service - OpenSSH per-connection server daemon (10.0.0.1:33834). May 17 00:18:28.060062 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:34988.service: Deactivated successfully. May 17 00:18:28.063446 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. 
May 17 00:18:28.063979 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:18:28.065142 systemd-logind[1564]: Removed session 18.
May 17 00:18:28.095383 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 33834 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:18:28.097101 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:18:28.101346 systemd-logind[1564]: New session 19 of user core.
May 17 00:18:28.108546 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:18:28.213664 sshd[6379]: pam_unix(sshd:session): session closed for user core
May 17 00:18:28.217611 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:33834.service: Deactivated successfully.
May 17 00:18:28.219796 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
May 17 00:18:28.219798 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:18:28.220896 systemd-logind[1564]: Removed session 19.
May 17 00:18:28.932482 kubelet[2724]: E0517 00:18:28.932410 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64"
May 17 00:18:28.940758 kubelet[2724]: I0517 00:18:28.940695 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cdx7n" podStartSLOduration=41.070174378 podStartE2EDuration="49.940679365s" podCreationTimestamp="2025-05-17 00:17:39 +0000 UTC" firstStartedPulling="2025-05-17 00:18:06.404770188 +0000 UTC m=+45.565540873" lastFinishedPulling="2025-05-17 00:18:15.275275165 +0000 UTC m=+54.436045860" observedRunningTime="2025-05-17 00:18:16.843554054 +0000 UTC m=+56.004324749" watchObservedRunningTime="2025-05-17 00:18:28.940679365 +0000 UTC m=+68.101450060"
May 17 00:18:29.932297 kubelet[2724]: E0517 00:18:29.932265 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:18:30.933848 kubelet[2724]: E0517 00:18:30.933809 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7f7f9c875b-6g4bk" podUID="b3843bf5-7516-4c9f-923b-822352f7eab5"
May 17 00:18:33.229472 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:33846.service - OpenSSH per-connection server daemon (10.0.0.1:33846).
May 17 00:18:33.262912 sshd[6406]: Accepted publickey for core from 10.0.0.1 port 33846 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:18:33.264244 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:18:33.267766 systemd-logind[1564]: New session 20 of user core.
May 17 00:18:33.279546 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:18:33.382634 sshd[6406]: pam_unix(sshd:session): session closed for user core
May 17 00:18:33.386937 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:33846.service: Deactivated successfully.
May 17 00:18:33.389409 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit.
May 17 00:18:33.389481 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:18:33.390762 systemd-logind[1564]: Removed session 20.
May 17 00:18:36.931482 kubelet[2724]: E0517 00:18:36.931447 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:18:38.392602 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:48656.service - OpenSSH per-connection server daemon (10.0.0.1:48656).
May 17 00:18:38.432563 sshd[6425]: Accepted publickey for core from 10.0.0.1 port 48656 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:18:38.434604 sshd[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:18:38.438887 systemd-logind[1564]: New session 21 of user core.
May 17 00:18:38.447769 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:18:38.599295 sshd[6425]: pam_unix(sshd:session): session closed for user core
May 17 00:18:38.603041 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:48656.service: Deactivated successfully.
May 17 00:18:38.605533 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit.
May 17 00:18:38.605658 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:18:38.606960 systemd-logind[1564]: Removed session 21.
May 17 00:18:42.933218 containerd[1579]: time="2025-05-17T00:18:42.932966945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:18:43.174462 containerd[1579]: time="2025-05-17T00:18:43.174399898Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:18:43.175624 containerd[1579]: time="2025-05-17T00:18:43.175590795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:18:43.175703 containerd[1579]: time="2025-05-17T00:18:43.175668702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:18:43.175834 kubelet[2724]: E0517 00:18:43.175793 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:18:43.176318 kubelet[2724]: E0517 00:18:43.175847 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:18:43.176318 kubelet[2724]: E0517 00:18:43.175974 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n49l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-82l2f_calico-system(1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:18:43.177150 kubelet[2724]: E0517 00:18:43.177125 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-82l2f" podUID="1a7bc7b9-b4ab-41b2-8768-f5e1f19adf64"
May 17 00:18:43.607629 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:48670.service - OpenSSH per-connection server daemon (10.0.0.1:48670).
May 17 00:18:43.649034 sshd[6444]: Accepted publickey for core from 10.0.0.1 port 48670 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:18:43.651112 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:18:43.655813 systemd-logind[1564]: New session 22 of user core.
May 17 00:18:43.661512 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:18:43.814488 sshd[6444]: pam_unix(sshd:session): session closed for user core
May 17 00:18:43.818689 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:48670.service: Deactivated successfully.
May 17 00:18:43.820816 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit.
May 17 00:18:43.820869 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:18:43.822027 systemd-logind[1564]: Removed session 22.
May 17 00:18:44.932349 containerd[1579]: time="2025-05-17T00:18:44.932300805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:18:45.181428 containerd[1579]: time="2025-05-17T00:18:45.181378858Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:18:45.220299 containerd[1579]: time="2025-05-17T00:18:45.220123134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 00:18:45.220299 containerd[1579]: time="2025-05-17T00:18:45.220146689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:18:45.220458 kubelet[2724]: E0517 00:18:45.220382 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:18:45.220458 kubelet[2724]: E0517 00:18:45.220432 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:18:45.220917 kubelet[2724]: E0517 00:18:45.220531 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5fa0e8b210c943fe9a524550ec7c8a90,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plrtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f7f9c875b-6g4bk_calico-system(b3843bf5-7516-4c9f-923b-822352f7eab5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:18:45.222444 containerd[1579]: time="2025-05-17T00:18:45.222406268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 00:18:45.509389 containerd[1579]: time="2025-05-17T00:18:45.509133295Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:18:45.510606 containerd[1579]: time="2025-05-17T00:18:45.510541642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:18:45.510694 containerd[1579]: time="2025-05-17T00:18:45.510618217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 17 00:18:45.510908 kubelet[2724]: E0517 00:18:45.510829 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:18:45.510908 kubelet[2724]: E0517 00:18:45.510898 2724 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:18:45.511088 kubelet[2724]: E0517 00:18:45.511024 2724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plrtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f7f9c875b-6g4bk_calico-system(b3843bf5-7516-4c9f-923b-822352f7eab5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:18:45.512278 kubelet[2724]: E0517 00:18:45.512194 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f7f9c875b-6g4bk" podUID="b3843bf5-7516-4c9f-923b-822352f7eab5"