May 17 00:15:29.873328 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025 May 17 00:15:29.873348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:15:29.873359 kernel: BIOS-provided physical RAM map: May 17 00:15:29.873365 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 17 00:15:29.873371 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 17 00:15:29.873377 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 00:15:29.873384 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 17 00:15:29.873391 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 17 00:15:29.873397 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 17 00:15:29.873405 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 17 00:15:29.873412 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 00:15:29.873418 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 00:15:29.873424 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 17 00:15:29.873430 kernel: NX (Execute Disable) protection: active May 17 00:15:29.873438 kernel: APIC: Static calls initialized May 17 00:15:29.873447 kernel: SMBIOS 2.8 present. 
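
The BIOS-e820 lines above are the firmware's physical memory map; everything the kernel can use as RAM comes from the ranges marked "usable". As an illustrative aside (not part of the boot output), a minimal Python sketch that totals those ranges from a captured dmesg; the regex and sample mirror the format above, and the result is within a few KiB of the "2571752K" total the kernel reports later:

    import re

    # Matches the "BIOS-e820: [mem 0xSTART-0xEND] TYPE" lines shown above.
    E820_LINE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)")

    def usable_bytes(dmesg_text):
        total = 0
        for start, end, kind in E820_LINE.findall(dmesg_text):
            if kind.strip() == "usable":
                total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
        return total

    sample = (
        "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable\n"
        "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable\n"
    )
    print(usable_bytes(sample) // 2**20, "MiB usable")  # ~2511 MiB
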
May 17 00:15:29.873454 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 17 00:15:29.873460 kernel: Hypervisor detected: KVM May 17 00:15:29.873467 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:15:29.873474 kernel: kvm-clock: using sched offset of 2233071102 cycles May 17 00:15:29.873481 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:15:29.873488 kernel: tsc: Detected 2794.748 MHz processor May 17 00:15:29.873495 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:15:29.873502 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:15:29.873509 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 17 00:15:29.873518 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 17 00:15:29.873525 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:15:29.873532 kernel: Using GB pages for direct mapping May 17 00:15:29.873539 kernel: ACPI: Early table checksum verification disabled May 17 00:15:29.873546 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 17 00:15:29.873553 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:15:29.873560 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:15:29.873567 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:15:29.873576 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 17 00:15:29.873583 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:15:29.873590 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:15:29.873597 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:15:29.873614 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:15:29.873622 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 17 00:15:29.873629 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 17 00:15:29.873639 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 17 00:15:29.873649 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 17 00:15:29.873656 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 17 00:15:29.873663 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 17 00:15:29.873670 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 17 00:15:29.873677 kernel: No NUMA configuration found May 17 00:15:29.873685 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 17 00:15:29.873694 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 17 00:15:29.873701 kernel: Zone ranges: May 17 00:15:29.873708 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:15:29.873715 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 17 00:15:29.873722 kernel: Normal empty May 17 00:15:29.873729 kernel: Movable zone start for each node May 17 00:15:29.873736 kernel: Early memory node ranges May 17 00:15:29.873743 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 00:15:29.873751 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 17 00:15:29.873758 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] May 17 00:15:29.873767 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:15:29.873774 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:15:29.873781 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 17 00:15:29.873789 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:15:29.873796 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:15:29.873803 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:15:29.873810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:15:29.873817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:15:29.873824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:15:29.873834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:15:29.873841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:15:29.873848 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:15:29.873856 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:15:29.873863 kernel: TSC deadline timer available May 17 00:15:29.873870 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 17 00:15:29.873877 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:15:29.873884 kernel: kvm-guest: KVM setup pv remote TLB flush May 17 00:15:29.873891 kernel: kvm-guest: setup PV sched yield May 17 00:15:29.873901 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:15:29.873908 kernel: Booting paravirtualized kernel on KVM May 17 00:15:29.873915 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:15:29.873923 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 17 00:15:29.873930 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 17 00:15:29.873938 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 17 00:15:29.873945 kernel: pcpu-alloc: [0] 0 1 2 3 May 17 00:15:29.873952 kernel: kvm-guest: PV spinlocks enabled May 17 00:15:29.873959 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:15:29.873969 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:15:29.873977 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:15:29.873984 kernel: random: crng init done May 17 00:15:29.873991 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:15:29.873999 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:15:29.874006 kernel: Fallback order for Node 0: 0 May 17 00:15:29.874013 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 May 17 00:15:29.874020 kernel: Policy zone: DMA32 May 17 00:15:29.874027 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:15:29.874037 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 136900K reserved, 0K cma-reserved) May 17 00:15:29.874044 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 17 00:15:29.874051 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:15:29.874059 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:15:29.874066 kernel: Dynamic Preempt: voluntary May 17 00:15:29.874073 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:15:29.874081 kernel: rcu: RCU event tracing is enabled. May 17 00:15:29.874088 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 17 00:15:29.874095 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:15:29.874105 kernel: Rude variant of Tasks RCU enabled. May 17 00:15:29.874112 kernel: Tracing variant of Tasks RCU enabled. May 17 00:15:29.874119 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:15:29.874126 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 17 00:15:29.874134 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 17 00:15:29.874141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:15:29.874148 kernel: Console: colour VGA+ 80x25 May 17 00:15:29.874155 kernel: printk: console [ttyS0] enabled May 17 00:15:29.874162 kernel: ACPI: Core revision 20230628 May 17 00:15:29.874172 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:15:29.874179 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:15:29.874186 kernel: x2apic enabled May 17 00:15:29.874193 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:15:29.874209 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 17 00:15:29.874216 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 17 00:15:29.874223 kernel: kvm-guest: setup PV IPIs May 17 00:15:29.874240 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:15:29.874248 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:15:29.874255 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 17 00:15:29.874263 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:15:29.874270 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:15:29.874280 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:15:29.874287 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:15:29.874295 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:15:29.874303 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:15:29.874313 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 17 00:15:29.874320 kernel: RETBleed: Mitigation: untrained return thunk May 17 00:15:29.874328 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:15:29.874336 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:15:29.874344 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
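
Each of the Spectre, RETBleed, and SRSO verdicts logged around here is also exported at runtime, one file per issue, under /sys/devices/system/cpu/vulnerabilities/. A small Python sketch that dumps them, assuming a kernel recent enough to expose that directory:

    import pathlib

    VULN_DIR = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")

    # One file per known CPU issue; the contents echo the boot-time verdicts,
    # e.g. "Mitigation: Retpolines" for spectre_v2 on this machine.
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
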
May 17 00:15:29.874354 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 17 00:15:29.874362 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 17 00:15:29.874372 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:15:29.874380 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:15:29.874389 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:15:29.874397 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:15:29.874405 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 17 00:15:29.874412 kernel: Freeing SMP alternatives memory: 32K May 17 00:15:29.874420 kernel: pid_max: default: 32768 minimum: 301 May 17 00:15:29.874427 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:15:29.874435 kernel: landlock: Up and running. May 17 00:15:29.874442 kernel: SELinux: Initializing. May 17 00:15:29.874450 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:15:29.874460 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:15:29.874468 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 17 00:15:29.874476 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 17 00:15:29.874483 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 17 00:15:29.874491 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 17 00:15:29.874499 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:15:29.874506 kernel: ... version: 0 May 17 00:15:29.874514 kernel: ... bit width: 48 May 17 00:15:29.874523 kernel: ... generic registers: 6 May 17 00:15:29.874531 kernel: ... value mask: 0000ffffffffffff May 17 00:15:29.874539 kernel: ... max period: 00007fffffffffff May 17 00:15:29.874546 kernel: ... fixed-purpose events: 0 May 17 00:15:29.874554 kernel: ... event mask: 000000000000003f May 17 00:15:29.874561 kernel: signal: max sigframe size: 1776 May 17 00:15:29.874569 kernel: rcu: Hierarchical SRCU implementation. May 17 00:15:29.874576 kernel: rcu: Max phase no-delay instances is 400. May 17 00:15:29.874584 kernel: smp: Bringing up secondary CPUs ... May 17 00:15:29.874591 kernel: smpboot: x86: Booting SMP configuration: May 17 00:15:29.874601 kernel: .... 
node #0, CPUs: #1 #2 #3 May 17 00:15:29.874619 kernel: smp: Brought up 1 node, 4 CPUs May 17 00:15:29.874626 kernel: smpboot: Max logical packages: 1 May 17 00:15:29.874634 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 17 00:15:29.874642 kernel: devtmpfs: initialized May 17 00:15:29.874649 kernel: x86/mm: Memory block size: 128MB May 17 00:15:29.874657 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:15:29.874664 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 17 00:15:29.874672 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:15:29.874682 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:15:29.874690 kernel: audit: initializing netlink subsys (disabled) May 17 00:15:29.874697 kernel: audit: type=2000 audit(1747440930.170:1): state=initialized audit_enabled=0 res=1 May 17 00:15:29.874705 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:15:29.874712 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:15:29.874720 kernel: cpuidle: using governor menu May 17 00:15:29.874727 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:15:29.874735 kernel: dca service started, version 1.12.1 May 17 00:15:29.874742 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:15:29.874752 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 17 00:15:29.874760 kernel: PCI: Using configuration type 1 for base access May 17 00:15:29.874768 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 17 00:15:29.874775 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:15:29.874783 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:15:29.874791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:15:29.874798 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:15:29.874806 kernel: ACPI: Added _OSI(Module Device) May 17 00:15:29.874813 kernel: ACPI: Added _OSI(Processor Device) May 17 00:15:29.874823 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:15:29.874831 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:15:29.874838 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:15:29.874846 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:15:29.874853 kernel: ACPI: Interpreter enabled May 17 00:15:29.874861 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:15:29.874868 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:15:29.874876 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:15:29.874883 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:15:29.874894 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:15:29.874901 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:15:29.875081 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:15:29.875221 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:15:29.875345 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:15:29.875355 kernel: PCI host bridge to bus 0000:00 May 17 00:15:29.875478 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
May 17 00:15:29.875652 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:15:29.875821 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:15:29.875931 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 17 00:15:29.876039 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:15:29.876149 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 17 00:15:29.876268 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:15:29.876404 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:15:29.876541 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 17 00:15:29.876678 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 17 00:15:29.876802 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 17 00:15:29.876922 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 17 00:15:29.877056 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:15:29.877258 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 17 00:15:29.877444 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 17 00:15:29.877587 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 17 00:15:29.877725 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 17 00:15:29.877861 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 17 00:15:29.877982 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 17 00:15:29.878101 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 17 00:15:29.878231 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 17 00:15:29.878367 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 17 00:15:29.878488 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 17 00:15:29.878627 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 17 00:15:29.878787 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 17 00:15:29.878913 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 17 00:15:29.879044 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:15:29.879165 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:15:29.879310 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:15:29.879438 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 17 00:15:29.879558 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 17 00:15:29.879740 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:15:29.879863 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:15:29.879874 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:15:29.879881 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:15:29.879893 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:15:29.879901 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:15:29.879908 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:15:29.879916 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:15:29.879923 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:15:29.879931 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:15:29.879938 
kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:15:29.879946 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:15:29.879953 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:15:29.879963 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:15:29.879970 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:15:29.879978 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:15:29.879985 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:15:29.879993 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:15:29.880000 kernel: iommu: Default domain type: Translated May 17 00:15:29.880007 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:15:29.880015 kernel: PCI: Using ACPI for IRQ routing May 17 00:15:29.880022 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:15:29.880032 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 17 00:15:29.880040 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 17 00:15:29.880160 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:15:29.880294 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:15:29.880416 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:15:29.880426 kernel: vgaarb: loaded May 17 00:15:29.880434 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:15:29.880442 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:15:29.880453 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:15:29.880461 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:15:29.880469 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:15:29.880476 kernel: pnp: PnP ACPI init May 17 00:15:29.880617 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:15:29.880630 kernel: pnp: PnP ACPI: found 6 devices May 17 00:15:29.880637 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:15:29.880645 kernel: NET: Registered PF_INET protocol family May 17 00:15:29.880656 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:15:29.880664 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:15:29.880672 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:15:29.880680 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:15:29.880688 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:15:29.880695 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:15:29.880703 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:15:29.880711 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:15:29.880718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:15:29.880728 kernel: NET: Registered PF_XDP protocol family May 17 00:15:29.880841 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:15:29.880951 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:15:29.881062 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:15:29.881172 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 17 00:15:29.881295 kernel: 
pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:15:29.881406 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 17 00:15:29.881416 kernel: PCI: CLS 0 bytes, default 64 May 17 00:15:29.881428 kernel: Initialise system trusted keyrings May 17 00:15:29.881435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:15:29.881443 kernel: Key type asymmetric registered May 17 00:15:29.881451 kernel: Asymmetric key parser 'x509' registered May 17 00:15:29.881458 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:15:29.881466 kernel: io scheduler mq-deadline registered May 17 00:15:29.881474 kernel: io scheduler kyber registered May 17 00:15:29.881481 kernel: io scheduler bfq registered May 17 00:15:29.881489 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:15:29.881500 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:15:29.881507 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 00:15:29.881515 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 17 00:15:29.881523 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:15:29.881530 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:15:29.881538 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:15:29.881546 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:15:29.881554 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:15:29.881725 kernel: rtc_cmos 00:04: RTC can wake from S4 May 17 00:15:29.881741 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:15:29.881855 kernel: rtc_cmos 00:04: registered as rtc0 May 17 00:15:29.881966 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:15:29 UTC (1747440929) May 17 00:15:29.882078 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 17 00:15:29.882088 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 00:15:29.882095 kernel: NET: Registered PF_INET6 protocol family May 17 00:15:29.882103 kernel: Segment Routing with IPv6 May 17 00:15:29.882111 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:15:29.882122 kernel: NET: Registered PF_PACKET protocol family May 17 00:15:29.882130 kernel: Key type dns_resolver registered May 17 00:15:29.882137 kernel: IPI shorthand broadcast: enabled May 17 00:15:29.882145 kernel: sched_clock: Marking stable (604002082, 106392226)->(728606317, -18212009) May 17 00:15:29.882152 kernel: registered taskstats version 1 May 17 00:15:29.882160 kernel: Loading compiled-in X.509 certificates May 17 00:15:29.882168 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:15:29.882176 kernel: Key type .fscrypt registered May 17 00:15:29.882183 kernel: Key type fscrypt-provisioning registered May 17 00:15:29.882193 kernel: ima: No TPM chip found, activating TPM-bypass! 
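
The PCI probe a little earlier (slots 00:00.0 through 00:1f.3, including the 1af4:* virtio functions) can be re-derived after boot from sysfs, where every device directory carries its vendor and device IDs. A hedged sketch, assuming the standard Linux sysfs layout:

    import pathlib

    # /sys/bus/pci/devices/<dom:bus:slot.fn>/{vendor,device} hold hex IDs
    # such as "0x1af4"/"0x1000" (the virtio network function probed above).
    for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name} {vendor[2:]}:{device[2:]}")
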
May 17 00:15:29.882209 kernel: ima: Allocated hash algorithm: sha1 May 17 00:15:29.882217 kernel: ima: No architecture policies found May 17 00:15:29.882225 kernel: clk: Disabling unused clocks May 17 00:15:29.882232 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:15:29.882240 kernel: Write protecting the kernel read-only data: 36864k May 17 00:15:29.882248 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:15:29.882255 kernel: Run /init as init process May 17 00:15:29.882263 kernel: with arguments: May 17 00:15:29.882272 kernel: /init May 17 00:15:29.882280 kernel: with environment: May 17 00:15:29.882287 kernel: HOME=/ May 17 00:15:29.882295 kernel: TERM=linux May 17 00:15:29.882302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:15:29.882312 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:15:29.882322 systemd[1]: Detected virtualization kvm. May 17 00:15:29.882330 systemd[1]: Detected architecture x86-64. May 17 00:15:29.882340 systemd[1]: Running in initrd. May 17 00:15:29.882348 systemd[1]: No hostname configured, using default hostname. May 17 00:15:29.882356 systemd[1]: Hostname set to . May 17 00:15:29.882365 systemd[1]: Initializing machine ID from VM UUID. May 17 00:15:29.882373 systemd[1]: Queued start job for default target initrd.target. May 17 00:15:29.882381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:15:29.882390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:15:29.882398 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:15:29.882409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:15:29.882429 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:15:29.882440 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:15:29.882450 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:15:29.882461 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:15:29.882469 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:15:29.882478 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:15:29.882486 systemd[1]: Reached target paths.target - Path Units. May 17 00:15:29.882495 systemd[1]: Reached target slices.target - Slice Units. May 17 00:15:29.882503 systemd[1]: Reached target swap.target - Swaps. May 17 00:15:29.882511 systemd[1]: Reached target timers.target - Timer Units. May 17 00:15:29.882520 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:15:29.882528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:15:29.882540 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:15:29.882548 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
May 17 00:15:29.882557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:15:29.882565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:15:29.882574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:15:29.882582 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:15:29.882593 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:15:29.882601 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:15:29.882630 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:15:29.882639 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:15:29.882647 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:15:29.882655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:15:29.882663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:15:29.882672 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:15:29.882680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:15:29.882688 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:15:29.882716 systemd-journald[193]: Collecting audit messages is disabled. May 17 00:15:29.882738 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:15:29.882747 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:15:29.882758 systemd-journald[193]: Journal started May 17 00:15:29.882777 systemd-journald[193]: Runtime Journal (/run/log/journal/6a903bfd062541c1952d5aea2e3dad08) is 6.0M, max 48.4M, 42.3M free. May 17 00:15:29.877538 systemd-modules-load[194]: Inserted module 'overlay' May 17 00:15:29.918380 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:15:29.918399 kernel: Bridge firewalling registered May 17 00:15:29.904956 systemd-modules-load[194]: Inserted module 'br_netfilter' May 17 00:15:29.921066 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:15:29.921461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:15:29.923745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:15:29.942736 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:15:29.956783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:15:29.957860 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:15:29.961235 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:15:29.971839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:15:29.972725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:15:29.974751 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:15:29.980845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:15:29.985125 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
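
The two "Inserted module" notes above ('overlay', 'br_netfilter') come from the initrd loading modules explicitly; whether a module is currently loaded can be checked by scanning /proc/modules. A minimal sketch:

    def module_loaded(name):
        # /proc/modules lists one loaded module per line, name in column one.
        with open("/proc/modules") as f:
            return any(line.split()[0] == name for line in f)

    for mod in ("overlay", "br_netfilter"):  # the two modules inserted above
        print(mod, "loaded" if module_loaded(mod) else "absent")
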
May 17 00:15:29.988415 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:15:30.010422 systemd-resolved[223]: Positive Trust Anchors: May 17 00:15:30.010436 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:15:30.010467 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:15:30.013070 systemd-resolved[223]: Defaulting to hostname 'linux'. May 17 00:15:30.021963 dracut-cmdline[229]: dracut-dracut-053 May 17 00:15:30.021963 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:15:30.014062 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:15:30.019945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:15:30.099645 kernel: SCSI subsystem initialized May 17 00:15:30.108629 kernel: Loading iSCSI transport class v2.0-870. May 17 00:15:30.119632 kernel: iscsi: registered transport (tcp) May 17 00:15:30.139624 kernel: iscsi: registered transport (qla4xxx) May 17 00:15:30.139648 kernel: QLogic iSCSI HBA Driver May 17 00:15:30.186319 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:15:30.192789 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:15:30.217262 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:15:30.217307 kernel: device-mapper: uevent: version 1.0.3 May 17 00:15:30.217329 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:15:30.257633 kernel: raid6: avx2x4 gen() 30574 MB/s May 17 00:15:30.274631 kernel: raid6: avx2x2 gen() 31034 MB/s May 17 00:15:30.291712 kernel: raid6: avx2x1 gen() 25781 MB/s May 17 00:15:30.291731 kernel: raid6: using algorithm avx2x2 gen() 31034 MB/s May 17 00:15:30.309716 kernel: raid6: .... xor() 19935 MB/s, rmw enabled May 17 00:15:30.309738 kernel: raid6: using avx2x2 recovery algorithm May 17 00:15:30.329629 kernel: xor: automatically using best checksumming function avx May 17 00:15:30.480641 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:15:30.491706 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:15:30.502755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:15:30.514234 systemd-udevd[412]: Using default interface naming scheme 'v255'. May 17 00:15:30.518743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:15:30.519869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
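
dracut re-prints the kernel command line it acts on, and rootflags=rw / mount.usrflags=ro appear twice (once prepended by dracut, once from the bootloader). A simplified Python sketch that splits such a line into key/value pairs; it ignores shell-style quoting for brevity:

    def parse_cmdline(text):
        args = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            # Bare flags map to None; a repeated key (like rootflags=rw
            # here) just overwrites with the same value.
            args[key] = value if sep else None
        return args

    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())
    print(args.get("root"), args.get("verity.usrhash"))
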
May 17 00:15:30.540681 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation May 17 00:15:30.572705 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:15:30.583826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:15:30.645222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:15:30.656790 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:15:30.669150 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:15:30.671243 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:15:30.675118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:15:30.676398 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:15:30.686630 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 17 00:15:30.687791 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:15:30.687901 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:15:30.693327 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 00:15:30.701889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:15:30.705886 kernel: libata version 3.00 loaded. May 17 00:15:30.705902 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:15:30.705913 kernel: GPT:9289727 != 19775487 May 17 00:15:30.705923 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:15:30.705934 kernel: GPT:9289727 != 19775487 May 17 00:15:30.705943 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:15:30.705953 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:15:30.706632 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:15:30.711256 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:15:30.711313 kernel: AES CTR mode by8 optimization enabled May 17 00:15:30.711602 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:15:30.713028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:15:30.722051 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:15:30.722302 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:15:30.722315 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:15:30.722860 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:15:30.713091 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:15:30.714490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:15:30.727083 kernel: scsi host0: ahci May 17 00:15:30.727289 kernel: scsi host1: ahci May 17 00:15:30.725733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:15:30.729563 kernel: scsi host2: ahci May 17 00:15:30.729777 kernel: scsi host3: ahci May 17 00:15:30.730120 kernel: scsi host4: ahci May 17 00:15:30.729300 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
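
The GPT complaints above come down to arithmetic: the backup GPT header belongs on the last LBA of the disk, but the primary header points at LBA 9289727, while the 19775488-sector virtio disk ends at LBA 19775487. That mismatch is the usual signature of an image written for a smaller disk and then grown. Worked through in Python with the numbers taken from the log:

    disk_sectors = 19775488          # "[vda] 19775488 512-byte logical blocks"
    recorded_alt = 9289727           # "GPT:9289727 != 19775487"
    expected_alt = disk_sectors - 1  # backup GPT header belongs on the last LBA

    assert round(disk_sectors * 512 / 2**30, 2) == 9.43  # matches "(9.43 GiB)"
    grown = (expected_alt - recorded_alt) * 512
    print(f"alternate header expected at LBA {expected_alt}, recorded {recorded_alt}")
    print(f"disk is {grown / 2**30:.2f} GiB larger than the original image")  # 5.00
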
May 17 00:15:30.741209 kernel: scsi host5: ahci May 17 00:15:30.741423 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 17 00:15:30.741436 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 17 00:15:30.741446 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 17 00:15:30.741468 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 17 00:15:30.741479 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 17 00:15:30.741488 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 17 00:15:30.751166 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (463) May 17 00:15:30.751226 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) May 17 00:15:30.764172 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 17 00:15:30.783782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 17 00:15:30.784273 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:15:30.790892 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 17 00:15:30.791160 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 17 00:15:30.798342 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 00:15:30.809811 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:15:30.813002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:15:30.819879 disk-uuid[552]: Primary Header is updated. May 17 00:15:30.819879 disk-uuid[552]: Secondary Entries is updated. May 17 00:15:30.819879 disk-uuid[552]: Secondary Header is updated. May 17 00:15:30.823627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:15:30.828644 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:15:30.841370 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:15:31.044643 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:15:31.053628 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:15:31.053657 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:15:31.053669 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:15:31.054634 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:15:31.054649 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:15:31.055637 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:15:31.056795 kernel: ata3.00: applying bridge limits May 17 00:15:31.056812 kernel: ata3.00: configured for UDMA/100 May 17 00:15:31.057637 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:15:31.112635 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:15:31.112898 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:15:31.126631 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 00:15:31.829639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:15:31.829698 disk-uuid[554]: The operation has completed successfully. 
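
The "Found device" lines above resolve because udev populates /dev/disk/by-label/ and /dev/disk/by-partuuid/ with symlinks to the real partition nodes (vda3, vda6, vda9 and friends). A sketch that prints those mappings, assuming udev has already run:

    import os
    import pathlib

    for subdir in ("by-label", "by-partuuid"):
        base = pathlib.Path("/dev/disk") / subdir
        for link in sorted(base.iterdir()):
            # Each entry is a symlink, e.g. by-label/ROOT pointing at ../../vda9.
            print(f"{subdir}/{link.name} -> {os.path.realpath(link)}")
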
May 17 00:15:31.857975 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:15:31.858138 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:15:31.890815 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:15:31.896101 sh[590]: Success May 17 00:15:31.909627 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:15:31.943409 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:15:31.957018 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:15:31.960399 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:15:31.970894 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:15:31.970940 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:15:31.970952 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:15:31.971921 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:15:31.972673 kernel: BTRFS info (device dm-0): using free space tree May 17 00:15:31.977595 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:15:31.979834 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:15:31.993772 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:15:31.995443 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:15:32.004111 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:15:32.004139 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:15:32.004150 kernel: BTRFS info (device vda6): using free space tree May 17 00:15:32.006644 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:15:32.015525 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:15:32.017324 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:15:32.025808 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:15:32.031838 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:15:32.085309 ignition[680]: Ignition 2.19.0 May 17 00:15:32.085323 ignition[680]: Stage: fetch-offline May 17 00:15:32.085360 ignition[680]: no configs at "/usr/lib/ignition/base.d" May 17 00:15:32.085373 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:15:32.085487 ignition[680]: parsed url from cmdline: "" May 17 00:15:32.085493 ignition[680]: no config URL provided May 17 00:15:32.085500 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:15:32.085511 ignition[680]: no config at "/usr/lib/ignition/user.ign" May 17 00:15:32.085550 ignition[680]: op(1): [started] loading QEMU firmware config module May 17 00:15:32.085557 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 00:15:32.097144 ignition[680]: op(1): [finished] loading QEMU firmware config module May 17 00:15:32.097186 ignition[680]: QEMU firmware config was not found. Ignoring... May 17 00:15:32.123016 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 17 00:15:32.133741 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:15:32.147070 ignition[680]: parsing config with SHA512: cb87be71ddcf4c9155412ed1d86c1246dfe58b0b6a78e9ca72b9e157c9e982423c258c95a18094b93c6ae514ac02d076961a6fddb511d8f4c217f99f4e15c804 May 17 00:15:32.151542 unknown[680]: fetched base config from "system" May 17 00:15:32.151560 unknown[680]: fetched user config from "qemu" May 17 00:15:32.152005 ignition[680]: fetch-offline: fetch-offline passed May 17 00:15:32.152074 ignition[680]: Ignition finished successfully May 17 00:15:32.154163 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:15:32.160205 systemd-networkd[779]: lo: Link UP May 17 00:15:32.160216 systemd-networkd[779]: lo: Gained carrier May 17 00:15:32.161824 systemd-networkd[779]: Enumeration completed May 17 00:15:32.161918 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:15:32.162266 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:15:32.162271 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:15:32.163650 systemd-networkd[779]: eth0: Link UP May 17 00:15:32.163654 systemd-networkd[779]: eth0: Gained carrier May 17 00:15:32.163662 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:15:32.164278 systemd[1]: Reached target network.target - Network. May 17 00:15:32.166173 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:15:32.174745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:15:32.179657 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:15:32.188954 ignition[782]: Ignition 2.19.0 May 17 00:15:32.188965 ignition[782]: Stage: kargs May 17 00:15:32.189116 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 17 00:15:32.189128 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:15:32.189937 ignition[782]: kargs: kargs passed May 17 00:15:32.189974 ignition[782]: Ignition finished successfully May 17 00:15:32.193226 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:15:32.205751 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:15:32.218113 ignition[791]: Ignition 2.19.0 May 17 00:15:32.218124 ignition[791]: Stage: disks May 17 00:15:32.218321 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 17 00:15:32.218332 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:15:32.219111 ignition[791]: disks: disks passed May 17 00:15:32.221816 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:15:32.219161 ignition[791]: Ignition finished successfully May 17 00:15:32.223501 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:15:32.225439 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:15:32.225862 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:15:32.226202 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:15:32.226375 systemd[1]: Reached target basic.target - Basic System. 
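
systemd-networkd above enumerates links, watches eth0 gain carrier, and then acquires 10.0.0.66/16 over DHCPv4. The link state it reacts to is readable per interface from sysfs; a minimal sketch:

    import pathlib

    for iface in sorted(pathlib.Path("/sys/class/net").iterdir()):
        # operstate reads "up", "down", or "unknown" (typical for loopback).
        state = (iface / "operstate").read_text().strip()
        print(iface.name, state)
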
May 17 00:15:32.236779 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:15:32.248813 systemd-resolved[223]: Detected conflict on linux IN A 10.0.0.66 May 17 00:15:32.248829 systemd-resolved[223]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. May 17 00:15:32.252850 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:15:32.259767 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:15:32.273761 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:15:32.358633 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:15:32.358930 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:15:32.360506 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:15:32.370687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:15:32.372596 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:15:32.373358 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:15:32.373397 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:15:32.386317 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809) May 17 00:15:32.386342 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:15:32.386354 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:15:32.386365 kernel: BTRFS info (device vda6): using free space tree May 17 00:15:32.386376 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:15:32.373420 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:15:32.381270 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:15:32.387368 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:15:32.390280 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:15:32.425250 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:15:32.429690 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory May 17 00:15:32.434282 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:15:32.438842 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:15:32.517801 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:15:32.526747 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:15:32.528463 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:15:32.534675 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:15:32.551888 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
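
The fsck summary above reports used/total pairs ("14/553520 files, 52654/553472 blocks"); as percentages they show the freshly provisioned ROOT filesystem is nearly empty:

    files_used, files_total = 14, 553520      # from the fsck line above
    blocks_used, blocks_total = 52654, 553472

    print(f"inodes in use: {100 * files_used / files_total:.4f}%")   # 0.0025%
    print(f"blocks in use: {100 * blocks_used / blocks_total:.1f}%")  # 9.5%
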
May 17 00:15:32.556437 ignition[922]: INFO : Ignition 2.19.0 May 17 00:15:32.556437 ignition[922]: INFO : Stage: mount May 17 00:15:32.558220 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:15:32.558220 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:15:32.558220 ignition[922]: INFO : mount: mount passed May 17 00:15:32.558220 ignition[922]: INFO : Ignition finished successfully May 17 00:15:32.559844 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:15:32.576702 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:15:32.970527 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:15:32.979839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:15:32.986638 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (936) May 17 00:15:32.988632 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:15:32.988655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:15:32.988666 kernel: BTRFS info (device vda6): using free space tree May 17 00:15:32.991639 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:15:32.993069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:15:33.014273 ignition[953]: INFO : Ignition 2.19.0 May 17 00:15:33.014273 ignition[953]: INFO : Stage: files May 17 00:15:33.016538 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:15:33.016538 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:15:33.016538 ignition[953]: DEBUG : files: compiled without relabeling support, skipping May 17 00:15:33.016538 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:15:33.016538 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:15:33.023707 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:15:33.023707 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:15:33.023707 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:15:33.023707 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:15:33.023707 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 17 00:15:33.019726 unknown[953]: wrote ssh authorized keys file for user: core May 17 00:15:33.080059 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:15:33.232494 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:15:33.234626 ignition[953]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:15:33.234626 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 17 00:15:34.029756 systemd-networkd[779]: eth0: Gained IPv6LL May 17 00:15:34.222053 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:15:34.600662 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:15:34.600662 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 17 00:15:34.604703 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 17 00:15:34.627376 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) 
for "coreos-metadata.service" May 17 00:15:34.632287 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 17 00:15:34.633948 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 17 00:15:34.633948 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 17 00:15:34.633948 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:15:34.633948 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:15:34.633948 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:15:34.633948 ignition[953]: INFO : files: files passed May 17 00:15:34.633948 ignition[953]: INFO : Ignition finished successfully May 17 00:15:34.635852 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:15:34.653825 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:15:34.656879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:15:34.658959 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:15:34.659108 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:15:34.666850 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory May 17 00:15:34.669852 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:15:34.669852 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:15:34.673207 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:15:34.676666 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:15:34.679314 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:15:34.692805 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:15:34.720205 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:15:34.721355 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:15:34.724043 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:15:34.726053 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:15:34.728073 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:15:34.741805 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:15:34.757380 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:15:34.761260 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:15:34.776411 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:15:34.778849 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:15:34.781235 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:15:34.783054 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 17 00:15:34.784105 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:15:34.786658 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:15:34.788927 systemd[1]: Stopped target basic.target - Basic System. May 17 00:15:34.790881 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:15:34.793072 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:15:34.795402 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:15:34.797635 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:15:34.799750 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:15:34.802258 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:15:34.804433 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:15:34.806453 systemd[1]: Stopped target swap.target - Swaps. May 17 00:15:34.808067 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:15:34.809091 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:15:34.811363 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:15:34.813534 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:15:34.815890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:15:34.816860 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:15:34.819427 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:15:34.820428 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:15:34.822653 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:15:34.823747 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:15:34.826106 systemd[1]: Stopped target paths.target - Path Units. May 17 00:15:34.827879 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:15:34.828078 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:15:34.828629 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:15:34.828916 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:15:34.833432 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:15:34.833539 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:15:34.835153 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:15:34.835240 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:15:34.837174 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:15:34.837294 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:15:34.838979 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:15:34.839085 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:15:34.853835 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:15:34.854318 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:15:34.854447 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:15:34.856939 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
May 17 00:15:34.858172 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:15:34.858336 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:15:34.863733 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:15:34.864746 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:15:34.870301 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:15:34.871344 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:15:34.873782 ignition[1007]: INFO : Ignition 2.19.0 May 17 00:15:34.873782 ignition[1007]: INFO : Stage: umount May 17 00:15:34.873782 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:15:34.873782 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:15:34.873782 ignition[1007]: INFO : umount: umount passed May 17 00:15:34.873782 ignition[1007]: INFO : Ignition finished successfully May 17 00:15:34.875274 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:15:34.875383 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:15:34.876375 systemd[1]: Stopped target network.target - Network. May 17 00:15:34.876931 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:15:34.876991 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:15:34.877639 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:15:34.877686 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:15:34.877953 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:15:34.877995 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:15:34.878289 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:15:34.878332 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:15:34.878761 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:15:34.879181 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:15:34.883087 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:15:34.889008 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:15:34.889139 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:15:34.891829 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:15:34.891883 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:15:34.892665 systemd-networkd[779]: eth0: DHCPv6 lease lost May 17 00:15:34.894891 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:15:34.895019 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:15:34.896413 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:15:34.896453 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:15:34.904696 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:15:34.906010 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:15:34.906063 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:15:34.908240 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:15:34.908286 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 17 00:15:34.910338 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:15:34.910383 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:15:34.912667 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:15:34.922823 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:15:34.922963 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:15:34.930381 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:15:34.930561 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:15:34.932450 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:15:34.932497 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:15:34.934220 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:15:34.934261 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:15:34.936089 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:15:34.936143 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:15:34.938543 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:15:34.938589 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:15:34.940296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:15:34.940342 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:15:34.951767 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:15:34.952869 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:15:34.954028 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:15:34.956298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:15:34.957535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:15:34.960921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:15:34.962055 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:15:35.043727 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:15:35.043849 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:15:35.044659 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:15:35.044964 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:15:35.045012 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:15:35.058784 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:15:35.065929 systemd[1]: Switching root. May 17 00:15:35.093788 systemd-journald[193]: Journal stopped May 17 00:15:36.141640 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
May 17 00:15:36.141719 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:15:36.141743 kernel: SELinux: policy capability open_perms=1 May 17 00:15:36.141758 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:15:36.141785 kernel: SELinux: policy capability always_check_network=0 May 17 00:15:36.141801 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:15:36.141816 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:15:36.141832 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:15:36.141848 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:15:36.141863 kernel: audit: type=1403 audit(1747440935.428:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:15:36.141880 systemd[1]: Successfully loaded SELinux policy in 41.726ms. May 17 00:15:36.141906 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.889ms. May 17 00:15:36.141927 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:15:36.141947 systemd[1]: Detected virtualization kvm. May 17 00:15:36.141965 systemd[1]: Detected architecture x86-64. May 17 00:15:36.141982 systemd[1]: Detected first boot. May 17 00:15:36.141999 systemd[1]: Initializing machine ID from VM UUID. May 17 00:15:36.142016 zram_generator::config[1052]: No configuration found. May 17 00:15:36.142040 systemd[1]: Populated /etc with preset unit settings. May 17 00:15:36.142058 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:15:36.142084 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:15:36.142105 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:15:36.142123 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:15:36.142146 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:15:36.142163 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:15:36.142180 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:15:36.142198 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:15:36.142215 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:15:36.142232 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:15:36.142249 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:15:36.142269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:15:36.142287 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:15:36.142303 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:15:36.142328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:15:36.142346 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 17 00:15:36.142363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:15:36.142380 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:15:36.142397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:15:36.142414 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:15:36.142434 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:15:36.142451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:15:36.142468 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:15:36.142485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:15:36.142502 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:15:36.142519 systemd[1]: Reached target slices.target - Slice Units. May 17 00:15:36.142535 systemd[1]: Reached target swap.target - Swaps. May 17 00:15:36.142555 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:15:36.142572 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:15:36.142587 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:15:36.142602 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:15:36.142629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:15:36.142645 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:15:36.142660 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:15:36.142686 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:15:36.142722 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:15:36.142760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:15:36.142789 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:15:36.142819 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:15:36.142847 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:15:36.142877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:15:36.142905 systemd[1]: Reached target machines.target - Containers. May 17 00:15:36.142933 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:15:36.142962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:15:36.142991 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:15:36.143025 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:15:36.143042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:15:36.143057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:15:36.143082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:15:36.143097 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 17 00:15:36.143113 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:15:36.143128 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:15:36.143145 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:15:36.143167 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:15:36.143183 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:15:36.143198 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:15:36.143215 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:15:36.143230 kernel: loop: module loaded May 17 00:15:36.143245 kernel: fuse: init (API version 7.39) May 17 00:15:36.143260 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:15:36.143275 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:15:36.143290 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:15:36.143309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:15:36.143324 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:15:36.143340 systemd[1]: Stopped verity-setup.service. May 17 00:15:36.143355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:15:36.143371 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:15:36.143407 systemd-journald[1122]: Collecting audit messages is disabled. May 17 00:15:36.143434 systemd-journald[1122]: Journal started May 17 00:15:36.143465 systemd-journald[1122]: Runtime Journal (/run/log/journal/6a903bfd062541c1952d5aea2e3dad08) is 6.0M, max 48.4M, 42.3M free. May 17 00:15:35.925345 systemd[1]: Queued start job for default target multi-user.target. May 17 00:15:36.146156 kernel: ACPI: bus type drm_connector registered May 17 00:15:36.146174 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:15:35.940963 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 17 00:15:35.941399 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:15:36.148601 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:15:36.149881 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:15:36.150990 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:15:36.152298 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:15:36.153528 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:15:36.154817 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:15:36.156284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:15:36.157835 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:15:36.158003 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:15:36.159503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:15:36.159682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:15:36.161252 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 17 00:15:36.161417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:15:36.162793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:15:36.162955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:15:36.164455 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:15:36.164631 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:15:36.165997 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:15:36.166168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:15:36.167683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:15:36.169057 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:15:36.170587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:15:36.183283 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:15:36.190681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:15:36.192926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:15:36.194055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:15:36.194088 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:15:36.196031 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:15:36.198334 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:15:36.200433 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:15:36.201567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:15:36.204143 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:15:36.206859 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:15:36.208716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:15:36.212192 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:15:36.213400 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:15:36.216767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:15:36.221933 systemd-journald[1122]: Time spent on flushing to /var/log/journal/6a903bfd062541c1952d5aea2e3dad08 is 19.461ms for 950 entries. May 17 00:15:36.221933 systemd-journald[1122]: System Journal (/var/log/journal/6a903bfd062541c1952d5aea2e3dad08) is 8.0M, max 195.6M, 187.6M free. May 17 00:15:36.260167 systemd-journald[1122]: Received client request to flush runtime journal. May 17 00:15:36.260201 kernel: loop0: detected capacity change from 0 to 140768 May 17 00:15:36.219834 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:15:36.223449 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:15:36.226603 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
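The journal flush statistics above allow a quick rate estimate: 19.461 ms for 950 entries works out to roughly 20 microseconds per entry.

    # Figures from the systemd-journald flush message above.
    flush_ms, entries = 19.461, 950
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~20.5 us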
May 17 00:15:36.228157 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:15:36.230772 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:15:36.239966 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:15:36.241527 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:15:36.244940 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:15:36.261623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:15:36.263926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:15:36.266082 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:15:36.278876 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:15:36.281640 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:15:36.288877 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:15:36.299979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:15:36.302147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:15:36.302904 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:15:36.304641 kernel: loop1: detected capacity change from 0 to 142488 May 17 00:15:36.306181 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:15:36.329782 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 17 00:15:36.329807 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 17 00:15:36.337556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:15:36.346115 kernel: loop2: detected capacity change from 0 to 224512 May 17 00:15:36.381665 kernel: loop3: detected capacity change from 0 to 140768 May 17 00:15:36.393636 kernel: loop4: detected capacity change from 0 to 142488 May 17 00:15:36.403633 kernel: loop5: detected capacity change from 0 to 224512 May 17 00:15:36.409030 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 17 00:15:36.409691 (sd-merge)[1192]: Merged extensions into '/usr'. May 17 00:15:36.414210 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:15:36.414227 systemd[1]: Reloading... May 17 00:15:36.465637 zram_generator::config[1218]: No configuration found. May 17 00:15:36.533756 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:15:36.588013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:15:36.636944 systemd[1]: Reloading finished in 222 ms. May 17 00:15:36.669549 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:15:36.671079 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:15:36.680744 systemd[1]: Starting ensure-sysext.service... 
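The sd-merge lines above show systemd-sysext overlaying the three extension images onto /usr. A sysext image named NAME must carry /usr/lib/extension-release.d/extension-release.NAME, and after a successful merge those marker files become visible in the merged /usr tree. A small sketch checking for them, using the extension names reported above (the check itself is hypothetical, not something this boot ran):

    import os

    for name in ("containerd-flatcar", "docker-flatcar", "kubernetes"):
        marker = f"/usr/lib/extension-release.d/extension-release.{name}"
        state = "merged" if os.path.exists(marker) else "not merged"
        print(f"{name}: {state} ({marker})")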
May 17 00:15:36.682689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:15:36.690873 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... May 17 00:15:36.690890 systemd[1]: Reloading... May 17 00:15:36.704339 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:15:36.704718 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:15:36.705691 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:15:36.706000 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. May 17 00:15:36.706089 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. May 17 00:15:36.709514 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:15:36.709525 systemd-tmpfiles[1256]: Skipping /boot May 17 00:15:36.723772 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:15:36.723784 systemd-tmpfiles[1256]: Skipping /boot May 17 00:15:36.746650 zram_generator::config[1283]: No configuration found. May 17 00:15:36.878223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:15:36.929893 systemd[1]: Reloading finished in 238 ms. May 17 00:15:36.948600 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:15:36.961100 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:15:36.969752 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:15:36.972404 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:15:36.974889 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:15:36.980283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:15:36.984835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:15:36.994926 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:15:36.999555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:15:36.999987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:15:37.001658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:15:37.004218 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:15:37.007839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:15:37.009246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:15:37.012058 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:15:37.013171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
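The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") mean two tmpfiles.d fragments declare the same path, and the later one in precedence order is dropped. A rough Python sketch that finds such collisions, simplified to two of the standard tmpfiles.d directories (real precedence also covers /run/tmpfiles.d, and an /etc fragment masks a same-named /usr one):

    import glob, os

    seen = {}
    for d in ("/etc/tmpfiles.d", "/usr/lib/tmpfiles.d"):
        for frag in sorted(glob.glob(os.path.join(d, "*.conf"))):
            with open(frag) as f:
                for n, line in enumerate(f, 1):
                    fields = line.split()
                    if len(fields) < 2 or fields[0].startswith("#"):
                        continue
                    path = fields[1]
                    if path in seen:
                        print(f"{frag}:{n}: duplicate line for path {path}, "
                              f"first seen at {seen[path]}")
                    else:
                        seen[path] = f"{frag}:{n}"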
May 17 00:15:37.014094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:15:37.014283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:15:37.016179 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:15:37.019732 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:15:37.020783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:15:37.022775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:15:37.023012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:15:37.029139 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:15:37.033009 systemd-udevd[1329]: Using default interface naming scheme 'v255'. May 17 00:15:37.033911 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:15:37.034601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:15:37.034946 augenrules[1352]: No rules May 17 00:15:37.041956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:15:37.045225 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:15:37.048907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:15:37.050877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:15:37.054419 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:15:37.056709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:15:37.057782 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:15:37.059534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:15:37.059862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:15:37.062450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:15:37.062977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:15:37.064585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:15:37.068099 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:15:37.069719 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:15:37.070657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:15:37.072348 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:15:37.078933 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:15:37.098582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:15:37.098743 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:15:37.104778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:15:37.109018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 17 00:15:37.111633 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1370) May 17 00:15:37.116764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:15:37.120874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:15:37.122107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:15:37.124597 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:15:37.127696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:15:37.127728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:15:37.128310 systemd[1]: Finished ensure-sysext.service. May 17 00:15:37.131008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:15:37.131216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:15:37.132700 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:15:37.132867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:15:37.143184 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:15:37.151393 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:15:37.151624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:15:37.157554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:15:37.157957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:15:37.165678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:15:37.165753 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:15:37.173831 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:15:37.180386 systemd-resolved[1327]: Positive Trust Anchors: May 17 00:15:37.180714 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:15:37.180786 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:15:37.188570 systemd-resolved[1327]: Defaulting to hostname 'linux'. May 17 00:15:37.190818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:15:37.195822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 00:15:37.197418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
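The positive trust anchor logged by systemd-resolved above is the DNSSEC DS record for the root zone. Its four fields are the key tag, the signing algorithm, the digest type, and the digest itself:

    # Root trust anchor exactly as logged by systemd-resolved above.
    ds = ("20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    key_tag, algorithm, digest_type, digest = ds.split()
    print("key tag:    ", key_tag)      # identifies the root key-signing key
    print("algorithm:  ", algorithm)    # 8 = RSA/SHA-256
    print("digest type:", digest_type)  # 2 = SHA-256
    print("digest:     ", digest)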
May 17 00:15:37.203623 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:15:37.207284 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:15:37.207566 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:15:37.207990 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:15:37.209870 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:15:37.213646 kernel: ACPI: button: Power Button [PWRF] May 17 00:15:37.217627 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:15:37.237738 systemd-networkd[1397]: lo: Link UP May 17 00:15:37.237752 systemd-networkd[1397]: lo: Gained carrier May 17 00:15:37.239644 systemd-networkd[1397]: Enumeration completed May 17 00:15:37.241054 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:15:37.241058 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:15:37.242585 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:15:37.243880 systemd-networkd[1397]: eth0: Link UP May 17 00:15:37.243892 systemd-networkd[1397]: eth0: Gained carrier May 17 00:15:37.243905 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:15:37.245270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:15:37.251667 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:15:37.252405 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. May 17 00:15:37.252494 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:15:38.534259 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 00:15:38.534296 systemd-timesyncd[1409]: Initial clock synchronization to Sat 2025-05-17 00:15:38.534183 UTC. May 17 00:15:38.535573 systemd[1]: Reached target network.target - Network. May 17 00:15:38.536097 systemd-resolved[1327]: Clock change detected. Flushing caches. May 17 00:15:38.537064 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:15:38.546845 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:15:38.570050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:15:38.584023 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:15:38.643810 kernel: kvm_amd: TSC scaling supported May 17 00:15:38.643940 kernel: kvm_amd: Nested Virtualization enabled May 17 00:15:38.643969 kernel: kvm_amd: Nested Paging enabled May 17 00:15:38.644051 kernel: kvm_amd: LBR virtualization supported May 17 00:15:38.644088 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 17 00:15:38.644117 kernel: kvm_amd: Virtual GIF supported May 17 00:15:38.658251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:15:38.665702 kernel: EDAC MC: Ver: 3.0.0 May 17 00:15:38.695463 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
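The DHCPv4 lease above (10.0.0.66/16 from 10.0.0.1, matched by Flatcar's catch-all zz-default.network) implies a 255.255.0.0 netmask and a 10.0.0.0/16 network; the jump of logged timestamps from 00:15:37 to 00:15:38 is the initial clock synchronization that systemd-resolved then reports as "Clock change detected". Python's ipaddress module confirms the prefix arithmetic:

    import ipaddress

    # Lease parameters from the systemd-networkd message above.
    iface = ipaddress.ip_interface("10.0.0.66/16")
    print("address:", iface.ip)                               # 10.0.0.66
    print("netmask:", iface.netmask)                          # 255.255.0.0
    print("network:", iface.network)                          # 10.0.0.0/16
    print("usable hosts:", iface.network.num_addresses - 2)   # 65534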
May 17 00:15:38.710861 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:15:38.719246 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:15:38.749027 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:15:38.750564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:15:38.751698 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:15:38.752872 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:15:38.754147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:15:38.755592 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:15:38.756876 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:15:38.758142 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:15:38.759401 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:15:38.759428 systemd[1]: Reached target paths.target - Path Units. May 17 00:15:38.760390 systemd[1]: Reached target timers.target - Timer Units. May 17 00:15:38.762092 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:15:38.764784 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:15:38.774090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:15:38.776371 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:15:38.777927 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:15:38.779087 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:15:38.780052 systemd[1]: Reached target basic.target - Basic System. May 17 00:15:38.781002 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:15:38.781030 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:15:38.781978 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:15:38.784064 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:15:38.788760 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:15:38.792695 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:15:38.793885 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:15:38.795774 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:15:38.796585 jq[1435]: false May 17 00:15:38.797830 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:15:38.802409 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:15:38.804655 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:15:38.810833 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 17 00:15:38.810988 extend-filesystems[1436]: Found loop3 May 17 00:15:38.814327 extend-filesystems[1436]: Found loop4 May 17 00:15:38.814327 extend-filesystems[1436]: Found loop5 May 17 00:15:38.814327 extend-filesystems[1436]: Found sr0 May 17 00:15:38.814327 extend-filesystems[1436]: Found vda May 17 00:15:38.814327 extend-filesystems[1436]: Found vda1 May 17 00:15:38.814327 extend-filesystems[1436]: Found vda2 May 17 00:15:38.814327 extend-filesystems[1436]: Found vda3 May 17 00:15:38.814327 extend-filesystems[1436]: Found usr May 17 00:15:38.814327 extend-filesystems[1436]: Found vda4 May 17 00:15:38.814327 extend-filesystems[1436]: Found vda6 May 17 00:15:38.814327 extend-filesystems[1436]: Found vda7 May 17 00:15:38.814327 extend-filesystems[1436]: Found vda9 May 17 00:15:38.814327 extend-filesystems[1436]: Checking size of /dev/vda9 May 17 00:15:38.819471 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:15:38.829147 dbus-daemon[1434]: [system] SELinux support is enabled May 17 00:15:38.839901 extend-filesystems[1436]: Resized partition /dev/vda9 May 17 00:15:38.845835 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1370) May 17 00:15:38.820365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:15:38.820909 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:15:38.830948 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:15:38.847813 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) May 17 00:15:38.847828 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:15:38.848946 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:15:38.856083 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:15:38.855594 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:15:38.859208 update_engine[1451]: I20250517 00:15:38.858916 1451 main.cc:92] Flatcar Update Engine starting May 17 00:15:38.862497 jq[1457]: true May 17 00:15:38.867208 update_engine[1451]: I20250517 00:15:38.862237 1451 update_check_scheduler.cc:74] Next update check in 11m50s May 17 00:15:38.867158 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:15:38.867367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:15:38.867718 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:15:38.867913 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:15:38.871089 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:15:38.871300 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:15:38.883756 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:15:38.885636 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:15:38.903496 systemd[1]: Started update-engine.service - Update Engine. 
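The resize messages above grow /dev/vda9 from 553472 to 1864699 blocks; with the 4 KiB block size noted in the resize2fs output ("(4k) blocks", below), that takes the root filesystem from about 2.1 GiB to about 7.1 GiB:

    # Block counts from the EXT4 resize message above; 4 KiB blocks per
    # the "(4k) blocks" note in the resize2fs output below.
    old_blocks, new_blocks, block_size = 553472, 1864699, 4096
    gib = 1024 ** 3
    print(f"before: {old_blocks * block_size / gib:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * block_size / gib:.2f} GiB")  # ~7.11 GiB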
May 17 00:15:38.906652 jq[1461]: true May 17 00:15:38.906764 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:15:38.906786 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:15:38.908259 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:15:38.908279 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:15:38.910747 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:15:38.910747 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:15:38.910747 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:15:38.916738 extend-filesystems[1436]: Resized filesystem in /dev/vda9 May 17 00:15:38.919204 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:15:38.920029 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:15:38.921157 systemd-logind[1446]: New seat seat0. May 17 00:15:38.923839 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:15:38.925501 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:15:38.925949 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:15:38.927940 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:15:38.935346 tar[1460]: linux-amd64/LICENSE May 17 00:15:38.935770 tar[1460]: linux-amd64/helm May 17 00:15:38.960054 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:15:38.965172 bash[1490]: Updated "/home/core/.ssh/authorized_keys" May 17 00:15:38.966577 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:15:38.969670 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 17 00:15:38.974028 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:15:39.000729 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:15:39.008944 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:15:39.017851 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:15:39.018116 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:15:39.024886 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:15:39.037523 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:15:39.047683 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:15:39.050119 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:15:39.051566 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:15:39.079567 containerd[1462]: time="2025-05-17T00:15:39.079415026Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:15:39.102878 containerd[1462]: time="2025-05-17T00:15:39.102845305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:15:39.104632 containerd[1462]: time="2025-05-17T00:15:39.104577313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:15:39.104632 containerd[1462]: time="2025-05-17T00:15:39.104601058Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:15:39.104632 containerd[1462]: time="2025-05-17T00:15:39.104614633Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:15:39.104827 containerd[1462]: time="2025-05-17T00:15:39.104802245Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:15:39.104827 containerd[1462]: time="2025-05-17T00:15:39.104818977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:15:39.104973 containerd[1462]: time="2025-05-17T00:15:39.104878959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:15:39.104973 containerd[1462]: time="2025-05-17T00:15:39.104895991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:15:39.105109 containerd[1462]: time="2025-05-17T00:15:39.105078063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:15:39.105109 containerd[1462]: time="2025-05-17T00:15:39.105097719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:15:39.105151 containerd[1462]: time="2025-05-17T00:15:39.105118538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:15:39.105151 containerd[1462]: time="2025-05-17T00:15:39.105128217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:15:39.105286 containerd[1462]: time="2025-05-17T00:15:39.105220189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:15:39.105481 containerd[1462]: time="2025-05-17T00:15:39.105457544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:15:39.105602 containerd[1462]: time="2025-05-17T00:15:39.105579182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:15:39.105602 containerd[1462]: time="2025-05-17T00:15:39.105595062Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 17 00:15:39.105753 containerd[1462]: time="2025-05-17T00:15:39.105712543Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:15:39.105783 containerd[1462]: time="2025-05-17T00:15:39.105771864Z" level=info msg="metadata content store policy set" policy=shared May 17 00:15:39.111353 containerd[1462]: time="2025-05-17T00:15:39.111308138Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:15:39.111353 containerd[1462]: time="2025-05-17T00:15:39.111347172Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:15:39.111353 containerd[1462]: time="2025-05-17T00:15:39.111362060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:15:39.111506 containerd[1462]: time="2025-05-17T00:15:39.111380534Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:15:39.111506 containerd[1462]: time="2025-05-17T00:15:39.111393989Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:15:39.111543 containerd[1462]: time="2025-05-17T00:15:39.111506330Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:15:39.111728 containerd[1462]: time="2025-05-17T00:15:39.111704712Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:15:39.111829 containerd[1462]: time="2025-05-17T00:15:39.111806814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:15:39.111829 containerd[1462]: time="2025-05-17T00:15:39.111826491Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:15:39.111867 containerd[1462]: time="2025-05-17T00:15:39.111839184Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:15:39.111867 containerd[1462]: time="2025-05-17T00:15:39.111852139Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:15:39.111867 containerd[1462]: time="2025-05-17T00:15:39.111863731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:15:39.111929 containerd[1462]: time="2025-05-17T00:15:39.111874611Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:15:39.111929 containerd[1462]: time="2025-05-17T00:15:39.111888216Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:15:39.111929 containerd[1462]: time="2025-05-17T00:15:39.111901030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:15:39.111929 containerd[1462]: time="2025-05-17T00:15:39.111912472Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:15:39.111929 containerd[1462]: time="2025-05-17T00:15:39.111923833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.111934002Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.111956094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.111968878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.111981301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.111993364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.112005216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.112018341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.112029211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.112040993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.112053296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112063 containerd[1462]: time="2025-05-17T00:15:39.112066421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112077652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112090045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112109542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112125221Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112143365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112155358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112165907Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112207265Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112223966Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112234346Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112245707Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112254994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112266 containerd[1462]: time="2025-05-17T00:15:39.112265885Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:15:39.112491 containerd[1462]: time="2025-05-17T00:15:39.112281194Z" level=info msg="NRI interface is disabled by configuration." May 17 00:15:39.112491 containerd[1462]: time="2025-05-17T00:15:39.112290651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:15:39.112576 containerd[1462]: time="2025-05-17T00:15:39.112529109Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:15:39.112576 containerd[1462]: time="2025-05-17T00:15:39.112579593Z" level=info msg="Connect containerd service" May 17 00:15:39.112770 containerd[1462]: time="2025-05-17T00:15:39.112615140Z" level=info msg="using legacy CRI server" May 17 00:15:39.112770 containerd[1462]: time="2025-05-17T00:15:39.112622273Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:15:39.112770 containerd[1462]: time="2025-05-17T00:15:39.112730276Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:15:39.113386 containerd[1462]: time="2025-05-17T00:15:39.113358915Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:15:39.113634 containerd[1462]: time="2025-05-17T00:15:39.113525898Z" level=info msg="Start subscribing containerd event" May 17 00:15:39.113664 containerd[1462]: time="2025-05-17T00:15:39.113634241Z" level=info msg="Start recovering state" May 17 00:15:39.113759 containerd[1462]: time="2025-05-17T00:15:39.113729650Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:15:39.113785 containerd[1462]: time="2025-05-17T00:15:39.113770838Z" level=info msg="Start event monitor" May 17 00:15:39.113805 containerd[1462]: time="2025-05-17T00:15:39.113787329Z" level=info msg="Start snapshots syncer" May 17 00:15:39.113805 containerd[1462]: time="2025-05-17T00:15:39.113799441Z" level=info msg="Start cni network conf syncer for default" May 17 00:15:39.113849 containerd[1462]: time="2025-05-17T00:15:39.113807887Z" level=info msg="Start streaming server" May 17 00:15:39.113979 containerd[1462]: time="2025-05-17T00:15:39.113818187Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:15:39.114074 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:15:39.115239 containerd[1462]: time="2025-05-17T00:15:39.115209496Z" level=info msg="containerd successfully booted in 0.037021s" May 17 00:15:39.333774 tar[1460]: linux-amd64/README.md May 17 00:15:39.351568 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:15:39.854926 systemd-networkd[1397]: eth0: Gained IPv6LL May 17 00:15:39.858277 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:15:39.860106 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:15:39.874012 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 17 00:15:39.876853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:15:39.879017 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:15:39.898807 systemd[1]: coreos-metadata.service: Deactivated successfully. May 17 00:15:39.899048 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
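containerd comes up, but its CRI plugin logs "no network config found in /etc/cni/net.d": pod networking stays uninitialized until some CNI configuration lands in that directory, which on a kubeadm cluster normally happens when the network add-on is installed. Purely as a sketch of what the CRI plugin is looking for, here is a minimal bridge conflist with illustrative, made-up name and subnet; a real cluster would get this file from its network add-on rather than by hand:

    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF' >/dev/null
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.85.0.0/16" }]] }
      }]
    }
    EOF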
May 17 00:15:39.900639 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:15:39.902901 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:15:40.574847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:15:40.576510 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:15:40.577910 systemd[1]: Startup finished in 734ms (kernel) + 5.737s (initrd) + 3.907s (userspace) = 10.379s. May 17 00:15:40.589834 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:15:40.994998 kubelet[1548]: E0517 00:15:40.994870 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:15:40.998985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:15:40.999206 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:15:40.999532 systemd[1]: kubelet.service: Consumed 1.003s CPU time. May 17 00:15:43.952924 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:15:43.954177 systemd[1]: Started sshd@0-10.0.0.66:22-10.0.0.1:54332.service - OpenSSH per-connection server daemon (10.0.0.1:54332). May 17 00:15:43.998381 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 54332 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:15:44.000514 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:44.009203 systemd-logind[1446]: New session 1 of user core. May 17 00:15:44.010500 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:15:44.016920 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:15:44.028294 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:15:44.031097 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:15:44.039045 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:15:44.153646 systemd[1566]: Queued start job for default target default.target. May 17 00:15:44.166108 systemd[1566]: Created slice app.slice - User Application Slice. May 17 00:15:44.166138 systemd[1566]: Reached target paths.target - Paths. May 17 00:15:44.166152 systemd[1566]: Reached target timers.target - Timers. May 17 00:15:44.167809 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:15:44.179535 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:15:44.179692 systemd[1566]: Reached target sockets.target - Sockets. May 17 00:15:44.179710 systemd[1566]: Reached target basic.target - Basic System. May 17 00:15:44.179750 systemd[1566]: Reached target default.target - Main User Target. May 17 00:15:44.179785 systemd[1566]: Startup finished in 132ms. May 17 00:15:44.180191 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:15:44.181735 systemd[1]: Started session-1.scope - Session 1 of User core. 
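The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so this failure (and the restarts that follow below) is expected until one of those has run. A quick check from a shell on the node:

    # Present only after kubeadm has configured this node:
    ls -l /var/lib/kubelet/config.yaml
    systemctl status kubelet --no-pager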
May 17 00:15:44.342451 systemd[1]: Started sshd@1-10.0.0.66:22-10.0.0.1:54344.service - OpenSSH per-connection server daemon (10.0.0.1:54344). May 17 00:15:44.382234 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 54344 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:15:44.383975 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:44.388149 systemd-logind[1446]: New session 2 of user core. May 17 00:15:44.397852 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:15:44.451848 sshd[1577]: pam_unix(sshd:session): session closed for user core May 17 00:15:44.466300 systemd[1]: sshd@1-10.0.0.66:22-10.0.0.1:54344.service: Deactivated successfully. May 17 00:15:44.467905 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:15:44.469403 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. May 17 00:15:44.476951 systemd[1]: Started sshd@2-10.0.0.66:22-10.0.0.1:54360.service - OpenSSH per-connection server daemon (10.0.0.1:54360). May 17 00:15:44.477731 systemd-logind[1446]: Removed session 2. May 17 00:15:44.508270 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 54360 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:15:44.509665 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:44.513242 systemd-logind[1446]: New session 3 of user core. May 17 00:15:44.522800 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:15:44.571533 sshd[1584]: pam_unix(sshd:session): session closed for user core May 17 00:15:44.578124 systemd[1]: sshd@2-10.0.0.66:22-10.0.0.1:54360.service: Deactivated successfully. May 17 00:15:44.579794 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:15:44.581271 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. May 17 00:15:44.587951 systemd[1]: Started sshd@3-10.0.0.66:22-10.0.0.1:54366.service - OpenSSH per-connection server daemon (10.0.0.1:54366). May 17 00:15:44.588735 systemd-logind[1446]: Removed session 3. May 17 00:15:44.617365 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 54366 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:15:44.618876 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:44.622514 systemd-logind[1446]: New session 4 of user core. May 17 00:15:44.631809 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:15:44.685245 sshd[1591]: pam_unix(sshd:session): session closed for user core May 17 00:15:44.700442 systemd[1]: sshd@3-10.0.0.66:22-10.0.0.1:54366.service: Deactivated successfully. May 17 00:15:44.702212 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:15:44.703791 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. May 17 00:15:44.704999 systemd[1]: Started sshd@4-10.0.0.66:22-10.0.0.1:54368.service - OpenSSH per-connection server daemon (10.0.0.1:54368). May 17 00:15:44.705708 systemd-logind[1446]: Removed session 4. May 17 00:15:44.738002 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 54368 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:15:44.739467 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:44.743312 systemd-logind[1446]: New session 5 of user core. May 17 00:15:44.752799 systemd[1]: Started session-5.scope - Session 5 of User core. 
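Each inbound SSH connection gets its own per-connection unit (sshd@N-10.0.0.66:22-...) plus a session-N.scope under the user's slice, and logind removes the session as soon as the client disconnects, which is why sessions 2 through 4 appear and vanish in quick succession above. A sketch for observing the same churn interactively:

    loginctl list-sessions
    systemctl list-units 'sshd@*' --no-legend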
May 17 00:15:44.810406 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:15:44.810755 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:15:44.835722 sudo[1601]: pam_unix(sudo:session): session closed for user root May 17 00:15:44.837647 sshd[1598]: pam_unix(sshd:session): session closed for user core May 17 00:15:44.853557 systemd[1]: sshd@4-10.0.0.66:22-10.0.0.1:54368.service: Deactivated successfully. May 17 00:15:44.855355 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:15:44.857090 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. May 17 00:15:44.877910 systemd[1]: Started sshd@5-10.0.0.66:22-10.0.0.1:54374.service - OpenSSH per-connection server daemon (10.0.0.1:54374). May 17 00:15:44.878885 systemd-logind[1446]: Removed session 5. May 17 00:15:44.912409 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 54374 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:15:44.914208 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:44.917942 systemd-logind[1446]: New session 6 of user core. May 17 00:15:44.927797 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:15:44.984467 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:15:44.984964 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:15:44.989240 sudo[1610]: pam_unix(sudo:session): session closed for user root May 17 00:15:44.995905 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:15:44.996235 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:15:45.020905 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:15:45.022967 auditctl[1613]: No rules May 17 00:15:45.024635 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:15:45.025013 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:15:45.027194 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:15:45.069070 augenrules[1631]: No rules May 17 00:15:45.071078 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:15:45.072410 sudo[1609]: pam_unix(sudo:session): session closed for user root May 17 00:15:45.074538 sshd[1606]: pam_unix(sshd:session): session closed for user core May 17 00:15:45.086833 systemd[1]: sshd@5-10.0.0.66:22-10.0.0.1:54374.service: Deactivated successfully. May 17 00:15:45.088573 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:15:45.090339 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. May 17 00:15:45.091594 systemd[1]: Started sshd@6-10.0.0.66:22-10.0.0.1:54388.service - OpenSSH per-connection server daemon (10.0.0.1:54388). May 17 00:15:45.092654 systemd-logind[1446]: Removed session 6. May 17 00:15:45.128302 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 54388 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:15:45.130020 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:15:45.134083 systemd-logind[1446]: New session 7 of user core. May 17 00:15:45.143788 systemd[1]: Started session-7.scope - Session 7 of User core. 
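This session empties the audit ruleset: sudo removes 80-selinux.rules and 99-default.rules, then restarts audit-rules, after which both auditctl and augenrules report "No rules". A sketch for verifying the resulting state:

    sudo auditctl -l          # prints "No rules" when the kernel ruleset is empty
    ls /etc/audit/rules.d/    # the two removed files should no longer be listed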
May 17 00:15:45.199087 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:15:45.199505 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:15:45.474882 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:15:45.475199 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:15:45.749706 dockerd[1660]: time="2025-05-17T00:15:45.749551986Z" level=info msg="Starting up" May 17 00:15:46.055572 systemd[1]: var-lib-docker-metacopy\x2dcheck577415826-merged.mount: Deactivated successfully. May 17 00:15:46.081466 dockerd[1660]: time="2025-05-17T00:15:46.081412291Z" level=info msg="Loading containers: start." May 17 00:15:46.183699 kernel: Initializing XFRM netlink socket May 17 00:15:46.258826 systemd-networkd[1397]: docker0: Link UP May 17 00:15:46.283083 dockerd[1660]: time="2025-05-17T00:15:46.283047642Z" level=info msg="Loading containers: done." May 17 00:15:46.296488 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2128883949-merged.mount: Deactivated successfully. May 17 00:15:46.299272 dockerd[1660]: time="2025-05-17T00:15:46.299223032Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:15:46.299388 dockerd[1660]: time="2025-05-17T00:15:46.299333700Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:15:46.299476 dockerd[1660]: time="2025-05-17T00:15:46.299450459Z" level=info msg="Daemon has completed initialization" May 17 00:15:46.336247 dockerd[1660]: time="2025-05-17T00:15:46.335323887Z" level=info msg="API listen on /run/docker.sock" May 17 00:15:46.336019 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:15:46.999464 containerd[1462]: time="2025-05-17T00:15:46.999415744Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:15:47.686317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807024780.mount: Deactivated successfully. 
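dockerd selects the overlay2 storage driver but warns that native diff is disabled because this kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; the effect is slower diffing when building images, not a functional failure. A sketch to confirm both sides, assuming a stock docker CLI and a kernel that exposes /proc/config.gz (not all kernels do):

    docker info --format '{{.Driver}}'                      # expect: overlay2
    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz    # =y on this kernel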
May 17 00:15:48.531467 containerd[1462]: time="2025-05-17T00:15:48.531407522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:48.532334 containerd[1462]: time="2025-05-17T00:15:48.532301930Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 17 00:15:48.533595 containerd[1462]: time="2025-05-17T00:15:48.533560210Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:48.536184 containerd[1462]: time="2025-05-17T00:15:48.536158073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:48.537075 containerd[1462]: time="2025-05-17T00:15:48.537016834Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.537547289s" May 17 00:15:48.537075 containerd[1462]: time="2025-05-17T00:15:48.537065144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:15:48.537693 containerd[1462]: time="2025-05-17T00:15:48.537652175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:15:49.609122 containerd[1462]: time="2025-05-17T00:15:49.609058487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:49.610042 containerd[1462]: time="2025-05-17T00:15:49.609984574Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 17 00:15:49.611497 containerd[1462]: time="2025-05-17T00:15:49.611464160Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:49.614447 containerd[1462]: time="2025-05-17T00:15:49.614408422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:49.615397 containerd[1462]: time="2025-05-17T00:15:49.615362762Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.077683747s" May 17 00:15:49.615397 containerd[1462]: time="2025-05-17T00:15:49.615390124Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:15:49.615806 
containerd[1462]: time="2025-05-17T00:15:49.615780376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:15:50.811210 containerd[1462]: time="2025-05-17T00:15:50.811140179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:50.812088 containerd[1462]: time="2025-05-17T00:15:50.812017435Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 17 00:15:50.813467 containerd[1462]: time="2025-05-17T00:15:50.813419054Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:50.816055 containerd[1462]: time="2025-05-17T00:15:50.816024120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:50.817075 containerd[1462]: time="2025-05-17T00:15:50.817043372Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.201234312s" May 17 00:15:50.817113 containerd[1462]: time="2025-05-17T00:15:50.817074520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:15:50.817590 containerd[1462]: time="2025-05-17T00:15:50.817561844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:15:51.054371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:15:51.063815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:15:51.231347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:15:51.236567 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:15:51.392727 kubelet[1878]: E0517 00:15:51.392537 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:15:51.399225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:15:51.399437 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:15:52.061872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235966772.mount: Deactivated successfully. 
May 17 00:15:52.990118 containerd[1462]: time="2025-05-17T00:15:52.990038309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:52.990925 containerd[1462]: time="2025-05-17T00:15:52.990881050Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 17 00:15:52.992398 containerd[1462]: time="2025-05-17T00:15:52.992339636Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:52.994774 containerd[1462]: time="2025-05-17T00:15:52.994732364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:52.995241 containerd[1462]: time="2025-05-17T00:15:52.995198017Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.177451587s" May 17 00:15:52.995241 containerd[1462]: time="2025-05-17T00:15:52.995232282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:15:52.995738 containerd[1462]: time="2025-05-17T00:15:52.995706601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:15:53.577416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267032848.mount: Deactivated successfully. 
May 17 00:15:54.244880 containerd[1462]: time="2025-05-17T00:15:54.244820582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:54.246157 containerd[1462]: time="2025-05-17T00:15:54.246090514Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:15:54.247396 containerd[1462]: time="2025-05-17T00:15:54.247366257Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:54.250004 containerd[1462]: time="2025-05-17T00:15:54.249970792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:54.251442 containerd[1462]: time="2025-05-17T00:15:54.251371390Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.255456979s" May 17 00:15:54.251442 containerd[1462]: time="2025-05-17T00:15:54.251424189Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:15:54.251902 containerd[1462]: time="2025-05-17T00:15:54.251868572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:15:54.710805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545132360.mount: Deactivated successfully. 
May 17 00:15:54.715910 containerd[1462]: time="2025-05-17T00:15:54.715868854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:54.716659 containerd[1462]: time="2025-05-17T00:15:54.716603201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:15:54.717724 containerd[1462]: time="2025-05-17T00:15:54.717664903Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:54.719876 containerd[1462]: time="2025-05-17T00:15:54.719843589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:54.720520 containerd[1462]: time="2025-05-17T00:15:54.720478781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.569883ms" May 17 00:15:54.720520 containerd[1462]: time="2025-05-17T00:15:54.720515279Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:15:54.721041 containerd[1462]: time="2025-05-17T00:15:54.721001912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:15:55.232769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622959169.mount: Deactivated successfully. May 17 00:15:56.846471 containerd[1462]: time="2025-05-17T00:15:56.846392688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:56.847431 containerd[1462]: time="2025-05-17T00:15:56.847361926Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 17 00:15:56.848808 containerd[1462]: time="2025-05-17T00:15:56.848778763Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:56.851721 containerd[1462]: time="2025-05-17T00:15:56.851655629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:15:56.852871 containerd[1462]: time="2025-05-17T00:15:56.852830263Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.131796811s" May 17 00:15:56.852920 containerd[1462]: time="2025-05-17T00:15:56.852869456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:15:59.363191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
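Note the sandbox-image skew: the CRI config dumped at containerd startup advertises SandboxImage registry.k8s.io/pause:3.8, while pause:3.10 is what gets pulled here. Aligning containerd 1.7 with the pulled version means setting sandbox_image in its CRI section; a sketch of the relevant fragment (appending assumes no competing sandbox_image line already exists in config.toml):

    sudo tee -a /etc/containerd/config.toml <<'EOF' >/dev/null
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
    EOF
    sudo systemctl restart containerd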
May 17 00:15:59.372888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:15:59.397026 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit session-7.scope)... May 17 00:15:59.397044 systemd[1]: Reloading... May 17 00:15:59.474705 zram_generator::config[2074]: No configuration found. May 17 00:15:59.700819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:15:59.777422 systemd[1]: Reloading finished in 379 ms. May 17 00:15:59.827078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:15:59.831052 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:15:59.831302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:15:59.832849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:15:59.994444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:15:59.998793 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:16:00.037030 kubelet[2124]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:16:00.037030 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:16:00.037030 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
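During the reload, systemd flags docker.socket for referencing the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock on the fly. Since the shipped unit lives on the read-only /usr, the durable fix is a drop-in rather than editing the unit; a sketch (the drop-in filename is arbitrary):

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    sudo tee /etc/systemd/system/docker.socket.d/10-runtime-dir.conf <<'EOF' >/dev/null
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload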
May 17 00:16:00.037399 kubelet[2124]: I0517 00:16:00.037089 2124 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:16:00.299398 kubelet[2124]: I0517 00:16:00.299312 2124 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:16:00.299398 kubelet[2124]: I0517 00:16:00.299335 2124 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:16:00.299579 kubelet[2124]: I0517 00:16:00.299562 2124 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:16:00.318749 kubelet[2124]: E0517 00:16:00.318697 2124 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:00.319936 kubelet[2124]: I0517 00:16:00.319906 2124 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:16:00.326746 kubelet[2124]: E0517 00:16:00.326707 2124 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:16:00.326746 kubelet[2124]: I0517 00:16:00.326733 2124 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:16:00.331509 kubelet[2124]: I0517 00:16:00.331480 2124 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:16:00.332552 kubelet[2124]: I0517 00:16:00.332519 2124 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:16:00.332743 kubelet[2124]: I0517 00:16:00.332546 2124 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:16:00.332835 kubelet[2124]: I0517 00:16:00.332744 2124 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:16:00.332835 kubelet[2124]: I0517 00:16:00.332753 2124 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:16:00.332879 kubelet[2124]: I0517 00:16:00.332874 2124 state_mem.go:36] "Initialized new in-memory state store" May 17 00:16:00.335214 kubelet[2124]: I0517 00:16:00.335194 2124 kubelet.go:446] "Attempting to sync node with API server" May 17 00:16:00.336539 kubelet[2124]: I0517 00:16:00.336507 2124 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:16:00.336539 kubelet[2124]: I0517 00:16:00.336533 2124 kubelet.go:352] "Adding apiserver pod source" May 17 00:16:00.336539 kubelet[2124]: I0517 00:16:00.336543 2124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:16:00.339082 kubelet[2124]: W0517 00:16:00.338906 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:00.339082 kubelet[2124]: E0517 00:16:00.338972 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:00.339750 kubelet[2124]: I0517 00:16:00.339703 2124 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:16:00.340082 kubelet[2124]: I0517 00:16:00.340064 2124 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:16:00.340153 kubelet[2124]: W0517 00:16:00.340127 2124 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:16:00.340273 kubelet[2124]: W0517 00:16:00.340235 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:00.340318 kubelet[2124]: E0517 00:16:00.340284 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:00.342266 kubelet[2124]: I0517 00:16:00.342241 2124 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:16:00.342308 kubelet[2124]: I0517 00:16:00.342275 2124 server.go:1287] "Started kubelet" May 17 00:16:00.344119 kubelet[2124]: I0517 00:16:00.344061 2124 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:16:00.345392 kubelet[2124]: I0517 00:16:00.344384 2124 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:16:00.345392 kubelet[2124]: I0517 00:16:00.344440 2124 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:16:00.345392 kubelet[2124]: I0517 00:16:00.344606 2124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:16:00.345392 kubelet[2124]: I0517 00:16:00.345255 2124 server.go:479] "Adding debug handlers to kubelet server" May 17 00:16:00.346576 kubelet[2124]: I0517 00:16:00.346295 2124 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:16:00.346620 kubelet[2124]: E0517 00:16:00.346605 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:00.346652 kubelet[2124]: I0517 00:16:00.346630 2124 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:16:00.346829 kubelet[2124]: I0517 00:16:00.346809 2124 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:16:00.347453 kubelet[2124]: I0517 00:16:00.346869 2124 reconciler.go:26] "Reconciler: start to sync state" May 17 00:16:00.347453 kubelet[2124]: W0517 00:16:00.347204 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:00.347453 kubelet[2124]: E0517 00:16:00.347242 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: 
connection refused" logger="UnhandledError" May 17 00:16:00.348899 kubelet[2124]: E0517 00:16:00.348221 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="200ms" May 17 00:16:00.349798 kubelet[2124]: E0517 00:16:00.349324 2124 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:16:00.349923 kubelet[2124]: E0517 00:16:00.348787 2124 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.66:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.66:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1840284b8b2ae83d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:16:00.342255677 +0000 UTC m=+0.339755265,LastTimestamp:2025-05-17 00:16:00.342255677 +0000 UTC m=+0.339755265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:16:00.351045 kubelet[2124]: I0517 00:16:00.350100 2124 factory.go:221] Registration of the systemd container factory successfully May 17 00:16:00.351045 kubelet[2124]: I0517 00:16:00.350165 2124 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:16:00.351539 kubelet[2124]: I0517 00:16:00.351519 2124 factory.go:221] Registration of the containerd container factory successfully May 17 00:16:00.362244 kubelet[2124]: I0517 00:16:00.362199 2124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:16:00.364254 kubelet[2124]: I0517 00:16:00.364233 2124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:16:00.364354 kubelet[2124]: I0517 00:16:00.364333 2124 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:16:00.364406 kubelet[2124]: I0517 00:16:00.364355 2124 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:16:00.364406 kubelet[2124]: I0517 00:16:00.364363 2124 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:16:00.364695 kubelet[2124]: E0517 00:16:00.364516 2124 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:16:00.365319 kubelet[2124]: I0517 00:16:00.365305 2124 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:16:00.365455 kubelet[2124]: I0517 00:16:00.365381 2124 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:16:00.365455 kubelet[2124]: I0517 00:16:00.365398 2124 state_mem.go:36] "Initialized new in-memory state store" May 17 00:16:00.366059 kubelet[2124]: W0517 00:16:00.366005 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:00.366441 kubelet[2124]: E0517 00:16:00.366072 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:00.447313 kubelet[2124]: E0517 00:16:00.447279 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:00.465493 kubelet[2124]: E0517 00:16:00.465469 2124 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:16:00.547464 kubelet[2124]: E0517 00:16:00.547412 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:00.548984 kubelet[2124]: E0517 00:16:00.548948 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="400ms" May 17 00:16:00.648319 kubelet[2124]: E0517 00:16:00.648277 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:00.666569 kubelet[2124]: E0517 00:16:00.666517 2124 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:16:00.677659 kubelet[2124]: I0517 00:16:00.677629 2124 policy_none.go:49] "None policy: Start" May 17 00:16:00.677713 kubelet[2124]: I0517 00:16:00.677668 2124 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:16:00.677713 kubelet[2124]: I0517 00:16:00.677701 2124 state_mem.go:35] "Initializing new in-memory state store" May 17 00:16:00.682989 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:16:00.699767 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:16:00.702655 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 17 00:16:00.713529 kubelet[2124]: I0517 00:16:00.713497 2124 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:16:00.713890 kubelet[2124]: I0517 00:16:00.713721 2124 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:16:00.713890 kubelet[2124]: I0517 00:16:00.713735 2124 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:16:00.714004 kubelet[2124]: I0517 00:16:00.713928 2124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:16:00.714669 kubelet[2124]: E0517 00:16:00.714610 2124 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:16:00.714669 kubelet[2124]: E0517 00:16:00.714642 2124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:16:00.815423 kubelet[2124]: I0517 00:16:00.815379 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:16:00.815820 kubelet[2124]: E0517 00:16:00.815787 2124 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" May 17 00:16:00.950548 kubelet[2124]: E0517 00:16:00.950415 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="800ms" May 17 00:16:01.017387 kubelet[2124]: I0517 00:16:01.017367 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:16:01.017616 kubelet[2124]: E0517 00:16:01.017583 2124 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" May 17 00:16:01.074142 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 17 00:16:01.093060 kubelet[2124]: E0517 00:16:01.093036 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:01.095790 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 17 00:16:01.097535 kubelet[2124]: E0517 00:16:01.097512 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:01.099925 systemd[1]: Created slice kubepods-burstable-pod36abe138c9ef396f5d9d69068b14ee89.slice - libcontainer container kubepods-burstable-pod36abe138c9ef396f5d9d69068b14ee89.slice. 
May 17 00:16:01.101244 kubelet[2124]: E0517 00:16:01.101221 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:01.151543 kubelet[2124]: I0517 00:16:01.151513 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 00:16:01.151592 kubelet[2124]: I0517 00:16:01.151544 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36abe138c9ef396f5d9d69068b14ee89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36abe138c9ef396f5d9d69068b14ee89\") " pod="kube-system/kube-apiserver-localhost" May 17 00:16:01.151592 kubelet[2124]: I0517 00:16:01.151585 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36abe138c9ef396f5d9d69068b14ee89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36abe138c9ef396f5d9d69068b14ee89\") " pod="kube-system/kube-apiserver-localhost" May 17 00:16:01.151635 kubelet[2124]: I0517 00:16:01.151606 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:01.151635 kubelet[2124]: I0517 00:16:01.151621 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:01.151706 kubelet[2124]: I0517 00:16:01.151645 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:01.151706 kubelet[2124]: I0517 00:16:01.151666 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:01.151753 kubelet[2124]: I0517 00:16:01.151704 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36abe138c9ef396f5d9d69068b14ee89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36abe138c9ef396f5d9d69068b14ee89\") " pod="kube-system/kube-apiserver-localhost" May 17 00:16:01.151753 kubelet[2124]: I0517 00:16:01.151724 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:01.216149 kubelet[2124]: W0517 00:16:01.216069 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:01.216149 kubelet[2124]: E0517 00:16:01.216113 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:01.218616 kubelet[2124]: W0517 00:16:01.218581 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:01.218665 kubelet[2124]: E0517 00:16:01.218621 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:01.228421 kubelet[2124]: W0517 00:16:01.228375 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:01.228421 kubelet[2124]: E0517 00:16:01.228415 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:01.393779 kubelet[2124]: E0517 00:16:01.393754 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:01.394362 containerd[1462]: time="2025-05-17T00:16:01.394322434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 17 00:16:01.398504 kubelet[2124]: E0517 00:16:01.398481 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:01.398791 containerd[1462]: time="2025-05-17T00:16:01.398761671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 17 00:16:01.402317 kubelet[2124]: E0517 00:16:01.402295 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 17 00:16:01.402555 containerd[1462]: time="2025-05-17T00:16:01.402537273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36abe138c9ef396f5d9d69068b14ee89,Namespace:kube-system,Attempt:0,}" May 17 00:16:01.419804 kubelet[2124]: I0517 00:16:01.419782 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:16:01.420038 kubelet[2124]: E0517 00:16:01.420007 2124 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" May 17 00:16:01.751327 kubelet[2124]: E0517 00:16:01.751271 2124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="1.6s" May 17 00:16:01.846382 kubelet[2124]: W0517 00:16:01.846332 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:01.846423 kubelet[2124]: E0517 00:16:01.846391 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:02.221269 kubelet[2124]: I0517 00:16:02.221230 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:16:02.221750 kubelet[2124]: E0517 00:16:02.221603 2124 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" May 17 00:16:02.496938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628383562.mount: Deactivated successfully. 
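
The lease controller's retry interval doubles across these failures: interval="200ms", then "400ms", "800ms", and "1.6s" above. A sketch of that doubling pattern only; the ceiling used here is a placeholder, since the kubelet's actual cap is not visible in this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        maxInterval := 7 * time.Second // placeholder cap, not taken from the log
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d failed, will retry in %s\n", attempt, interval)
            time.Sleep(interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
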
May 17 00:16:02.504045 kubelet[2124]: E0517 00:16:02.503992 2124 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:02.505946 containerd[1462]: time="2025-05-17T00:16:02.505901228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:16:02.506994 containerd[1462]: time="2025-05-17T00:16:02.506958541Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:16:02.508175 containerd[1462]: time="2025-05-17T00:16:02.508115081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:16:02.509370 containerd[1462]: time="2025-05-17T00:16:02.509323357Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:16:02.510313 containerd[1462]: time="2025-05-17T00:16:02.510271596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:16:02.511381 containerd[1462]: time="2025-05-17T00:16:02.511355619Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:16:02.512652 containerd[1462]: time="2025-05-17T00:16:02.512600554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:16:02.516050 containerd[1462]: time="2025-05-17T00:16:02.516015650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:16:02.517633 containerd[1462]: time="2025-05-17T00:16:02.517595143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.118790171s" May 17 00:16:02.518310 containerd[1462]: time="2025-05-17T00:16:02.518284646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.115706827s" May 17 00:16:02.519033 containerd[1462]: time="2025-05-17T00:16:02.519001030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.124594688s" May 17 00:16:02.650349 containerd[1462]: time="2025-05-17T00:16:02.649898887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:02.651698 containerd[1462]: time="2025-05-17T00:16:02.651156696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:02.651698 containerd[1462]: time="2025-05-17T00:16:02.651214945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:02.651698 containerd[1462]: time="2025-05-17T00:16:02.651229452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:02.651698 containerd[1462]: time="2025-05-17T00:16:02.651316195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:02.651912 containerd[1462]: time="2025-05-17T00:16:02.651734399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:02.651912 containerd[1462]: time="2025-05-17T00:16:02.651775266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:02.651912 containerd[1462]: time="2025-05-17T00:16:02.651847992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:02.653789 containerd[1462]: time="2025-05-17T00:16:02.653436171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:02.653789 containerd[1462]: time="2025-05-17T00:16:02.653480595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:02.653789 containerd[1462]: time="2025-05-17T00:16:02.653492086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:02.653789 containerd[1462]: time="2025-05-17T00:16:02.653569241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:02.674849 systemd[1]: Started cri-containerd-9d9814f89a0efc88021d24cf64c8ffd671e7bafef9a699a154040cf326e2d81d.scope - libcontainer container 9d9814f89a0efc88021d24cf64c8ffd671e7bafef9a699a154040cf326e2d81d. May 17 00:16:02.679541 systemd[1]: Started cri-containerd-dd2fce181fe52b193bc026630efd50bda541428e41e3a74795ee9dc673d50cc2.scope - libcontainer container dd2fce181fe52b193bc026630efd50bda541428e41e3a74795ee9dc673d50cc2. May 17 00:16:02.681197 systemd[1]: Started cri-containerd-eb5f2837ee18d199e466286b9d0ce9557562e042e3df85962acc85825709a6fd.scope - libcontainer container eb5f2837ee18d199e466286b9d0ce9557562e042e3df85962acc85825709a6fd. 
May 17 00:16:02.719779 containerd[1462]: time="2025-05-17T00:16:02.719657484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d9814f89a0efc88021d24cf64c8ffd671e7bafef9a699a154040cf326e2d81d\"" May 17 00:16:02.720731 kubelet[2124]: E0517 00:16:02.720703 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:02.721603 containerd[1462]: time="2025-05-17T00:16:02.721459313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36abe138c9ef396f5d9d69068b14ee89,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd2fce181fe52b193bc026630efd50bda541428e41e3a74795ee9dc673d50cc2\"" May 17 00:16:02.724074 containerd[1462]: time="2025-05-17T00:16:02.723889892Z" level=info msg="CreateContainer within sandbox \"9d9814f89a0efc88021d24cf64c8ffd671e7bafef9a699a154040cf326e2d81d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:16:02.724130 kubelet[2124]: E0517 00:16:02.724030 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:02.724484 containerd[1462]: time="2025-05-17T00:16:02.724462707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb5f2837ee18d199e466286b9d0ce9557562e042e3df85962acc85825709a6fd\"" May 17 00:16:02.726075 kubelet[2124]: E0517 00:16:02.726046 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:02.727030 containerd[1462]: time="2025-05-17T00:16:02.726997371Z" level=info msg="CreateContainer within sandbox \"dd2fce181fe52b193bc026630efd50bda541428e41e3a74795ee9dc673d50cc2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:16:02.727933 containerd[1462]: time="2025-05-17T00:16:02.727898061Z" level=info msg="CreateContainer within sandbox \"eb5f2837ee18d199e466286b9d0ce9557562e042e3df85962acc85825709a6fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:16:02.745437 containerd[1462]: time="2025-05-17T00:16:02.745373670Z" level=info msg="CreateContainer within sandbox \"9d9814f89a0efc88021d24cf64c8ffd671e7bafef9a699a154040cf326e2d81d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e999453612e166e5f0e1daffc650cabae956bfccfea610e759ec39ba9df72d6\"" May 17 00:16:02.745939 containerd[1462]: time="2025-05-17T00:16:02.745906990Z" level=info msg="StartContainer for \"4e999453612e166e5f0e1daffc650cabae956bfccfea610e759ec39ba9df72d6\"" May 17 00:16:02.755320 containerd[1462]: time="2025-05-17T00:16:02.755117957Z" level=info msg="CreateContainer within sandbox \"eb5f2837ee18d199e466286b9d0ce9557562e042e3df85962acc85825709a6fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"569fcb5a456a62fa4b3f99daf9def6e5257c09cd55ff208b9e7f363728a78e2b\"" May 17 00:16:02.756002 containerd[1462]: time="2025-05-17T00:16:02.755805808Z" level=info msg="StartContainer for \"569fcb5a456a62fa4b3f99daf9def6e5257c09cd55ff208b9e7f363728a78e2b\"" May 17 
00:16:02.756595 containerd[1462]: time="2025-05-17T00:16:02.756565292Z" level=info msg="CreateContainer within sandbox \"dd2fce181fe52b193bc026630efd50bda541428e41e3a74795ee9dc673d50cc2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9cfe0bc58762afa363a4c25bccf854b0e40b162a86690c5f85d84aa725f46e7\"" May 17 00:16:02.756888 containerd[1462]: time="2025-05-17T00:16:02.756867209Z" level=info msg="StartContainer for \"e9cfe0bc58762afa363a4c25bccf854b0e40b162a86690c5f85d84aa725f46e7\"" May 17 00:16:02.772831 systemd[1]: Started cri-containerd-4e999453612e166e5f0e1daffc650cabae956bfccfea610e759ec39ba9df72d6.scope - libcontainer container 4e999453612e166e5f0e1daffc650cabae956bfccfea610e759ec39ba9df72d6. May 17 00:16:02.787821 systemd[1]: Started cri-containerd-e9cfe0bc58762afa363a4c25bccf854b0e40b162a86690c5f85d84aa725f46e7.scope - libcontainer container e9cfe0bc58762afa363a4c25bccf854b0e40b162a86690c5f85d84aa725f46e7. May 17 00:16:02.790496 systemd[1]: Started cri-containerd-569fcb5a456a62fa4b3f99daf9def6e5257c09cd55ff208b9e7f363728a78e2b.scope - libcontainer container 569fcb5a456a62fa4b3f99daf9def6e5257c09cd55ff208b9e7f363728a78e2b. May 17 00:16:02.827866 containerd[1462]: time="2025-05-17T00:16:02.827817291Z" level=info msg="StartContainer for \"4e999453612e166e5f0e1daffc650cabae956bfccfea610e759ec39ba9df72d6\" returns successfully" May 17 00:16:02.827981 containerd[1462]: time="2025-05-17T00:16:02.827931936Z" level=info msg="StartContainer for \"e9cfe0bc58762afa363a4c25bccf854b0e40b162a86690c5f85d84aa725f46e7\" returns successfully" May 17 00:16:02.836760 containerd[1462]: time="2025-05-17T00:16:02.836274674Z" level=info msg="StartContainer for \"569fcb5a456a62fa4b3f99daf9def6e5257c09cd55ff208b9e7f363728a78e2b\" returns successfully" May 17 00:16:02.840390 kubelet[2124]: W0517 00:16:02.840305 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:02.840390 kubelet[2124]: E0517 00:16:02.840367 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:02.853759 kubelet[2124]: W0517 00:16:02.853665 2124 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused May 17 00:16:02.853802 kubelet[2124]: E0517 00:16:02.853782 2124 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" May 17 00:16:03.372493 kubelet[2124]: E0517 00:16:03.372317 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:03.372493 kubelet[2124]: E0517 00:16:03.372424 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:03.376571 kubelet[2124]: E0517 00:16:03.376149 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:03.376571 kubelet[2124]: E0517 00:16:03.376253 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:03.376834 kubelet[2124]: E0517 00:16:03.376759 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:03.376928 kubelet[2124]: E0517 00:16:03.376894 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:03.654309 kubelet[2124]: E0517 00:16:03.654178 2124 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 00:16:03.823139 kubelet[2124]: I0517 00:16:03.823095 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:16:03.832051 kubelet[2124]: I0517 00:16:03.832020 2124 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 00:16:03.832051 kubelet[2124]: E0517 00:16:03.832047 2124 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 17 00:16:03.839402 kubelet[2124]: E0517 00:16:03.839379 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:03.940562 kubelet[2124]: E0517 00:16:03.940444 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.040819 kubelet[2124]: E0517 00:16:04.040769 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.141419 kubelet[2124]: E0517 00:16:04.141362 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.242033 kubelet[2124]: E0517 00:16:04.241894 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.342216 kubelet[2124]: E0517 00:16:04.342175 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.377447 kubelet[2124]: E0517 00:16:04.377412 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:04.377911 kubelet[2124]: E0517 00:16:04.377490 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:04.377911 kubelet[2124]: E0517 00:16:04.377515 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:04.377911 kubelet[2124]: E0517 00:16:04.377563 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:04.442593 kubelet[2124]: E0517 00:16:04.442552 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.543260 kubelet[2124]: E0517 00:16:04.543141 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.551713 kubelet[2124]: E0517 00:16:04.551668 2124 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:16:04.551833 kubelet[2124]: E0517 00:16:04.551801 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:04.644133 kubelet[2124]: E0517 00:16:04.644084 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.744940 kubelet[2124]: E0517 00:16:04.744888 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.845913 kubelet[2124]: E0517 00:16:04.845879 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:04.946466 kubelet[2124]: E0517 00:16:04.946422 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.047081 kubelet[2124]: E0517 00:16:05.047038 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.147640 kubelet[2124]: E0517 00:16:05.147491 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.248276 kubelet[2124]: E0517 00:16:05.248238 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.348861 kubelet[2124]: E0517 00:16:05.348814 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.449856 kubelet[2124]: E0517 00:16:05.449745 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.550595 kubelet[2124]: E0517 00:16:05.550544 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.651153 kubelet[2124]: E0517 00:16:05.651109 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.751722 kubelet[2124]: E0517 00:16:05.751617 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.852505 kubelet[2124]: E0517 00:16:05.852465 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:05.932346 systemd[1]: Reloading requested from client PID 2398 ('systemctl') (unit session-7.scope)... May 17 00:16:05.932362 systemd[1]: Reloading... May 17 00:16:05.953049 kubelet[2124]: E0517 00:16:05.953011 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:06.002717 zram_generator::config[2440]: No configuration found. 
May 17 00:16:06.053505 kubelet[2124]: E0517 00:16:06.053464 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:06.102588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:16:06.153973 kubelet[2124]: E0517 00:16:06.153947 2124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:16:06.192321 systemd[1]: Reloading finished in 259 ms. May 17 00:16:06.233514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:16:06.255101 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:16:06.255478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:16:06.266870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:16:06.420897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:16:06.425457 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:16:06.460680 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:16:06.460680 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:16:06.460680 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:16:06.461052 kubelet[2482]: I0517 00:16:06.460754 2482 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:16:06.468432 kubelet[2482]: I0517 00:16:06.468397 2482 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:16:06.468432 kubelet[2482]: I0517 00:16:06.468424 2482 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:16:06.468689 kubelet[2482]: I0517 00:16:06.468663 2482 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:16:06.469827 kubelet[2482]: I0517 00:16:06.469809 2482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:16:06.471792 kubelet[2482]: I0517 00:16:06.471773 2482 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:16:06.474555 kubelet[2482]: E0517 00:16:06.474521 2482 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:16:06.474555 kubelet[2482]: I0517 00:16:06.474555 2482 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
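
"Client rotation is on, will bootstrap in background" together with the certificate_store line means the restarted kubelet authenticates to the API server with /var/lib/kubelet/pki/kubelet-client-current.pem. Only that path appears in the log; that it is conventionally a symlink to the most recently rotated cert/key pair in the same pki directory is an assumption here. A quick sketch for inspecting which pair is active under that assumption:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        link := "/var/lib/kubelet/pki/kubelet-client-current.pem" // path from the log
        target, err := os.Readlink(link)
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // not a symlink, or kubelet not bootstrapped yet
            return
        }
        if !filepath.IsAbs(target) {
            target = filepath.Join(filepath.Dir(link), target)
        }
        fmt.Println("active client cert/key pair:", target)
    }
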
May 17 00:16:06.478896 kubelet[2482]: I0517 00:16:06.478864 2482 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:16:06.479149 kubelet[2482]: I0517 00:16:06.479119 2482 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:16:06.479354 kubelet[2482]: I0517 00:16:06.479143 2482 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:16:06.479436 kubelet[2482]: I0517 00:16:06.479387 2482 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:16:06.479436 kubelet[2482]: I0517 00:16:06.479400 2482 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:16:06.479481 kubelet[2482]: I0517 00:16:06.479448 2482 state_mem.go:36] "Initialized new in-memory state store" May 17 00:16:06.481103 kubelet[2482]: I0517 00:16:06.479653 2482 kubelet.go:446] "Attempting to sync node with API server" May 17 00:16:06.481103 kubelet[2482]: I0517 00:16:06.479700 2482 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:16:06.481103 kubelet[2482]: I0517 00:16:06.479726 2482 kubelet.go:352] "Adding apiserver pod source" May 17 00:16:06.481103 kubelet[2482]: I0517 00:16:06.479736 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:16:06.481103 kubelet[2482]: I0517 00:16:06.480340 2482 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:16:06.481103 kubelet[2482]: I0517 00:16:06.480892 2482 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:16:06.481312 kubelet[2482]: I0517 00:16:06.481294 2482 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:16:06.481390 kubelet[2482]: I0517 00:16:06.481377 2482 server.go:1287] "Started kubelet" May 17 00:16:06.482737 kubelet[2482]: I0517 00:16:06.482697 2482 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:16:06.483087 kubelet[2482]: I0517 00:16:06.483051 2482 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:16:06.486694 kubelet[2482]: I0517 00:16:06.483582 2482 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:16:06.486694 kubelet[2482]: I0517 00:16:06.484629 2482 server.go:479] "Adding debug handlers to kubelet server" May 17 00:16:06.487666 kubelet[2482]: I0517 00:16:06.487645 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:16:06.488107 kubelet[2482]: I0517 00:16:06.487881 2482 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:16:06.488107 kubelet[2482]: I0517 00:16:06.488054 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:16:06.489854 kubelet[2482]: E0517 00:16:06.489839 2482 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:16:06.490408 kubelet[2482]: I0517 00:16:06.490385 2482 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:16:06.490542 kubelet[2482]: I0517 00:16:06.490517 2482 reconciler.go:26] "Reconciler: start to sync state" May 17 00:16:06.491379 kubelet[2482]: I0517 00:16:06.491353 2482 factory.go:221] Registration of the systemd container factory successfully May 17 00:16:06.491460 kubelet[2482]: I0517 00:16:06.491441 2482 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:16:06.492917 kubelet[2482]: I0517 00:16:06.492810 2482 factory.go:221] Registration of the containerd container factory successfully May 17 00:16:06.499438 kubelet[2482]: I0517 00:16:06.499397 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:16:06.500775 kubelet[2482]: I0517 00:16:06.500723 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:16:06.500775 kubelet[2482]: I0517 00:16:06.500772 2482 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:16:06.500832 kubelet[2482]: I0517 00:16:06.500789 2482 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
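
Both watchdog_linux messages reflect the standard systemd watchdog interface: for supervised services the manager exports WATCHDOG_USEC (and WATCHDOG_PID) into the unit's environment, and the service must notify systemd within that interval or be restarted. Since kubelet.service here sets no watchdog, the probe finds nothing and health checking stays off. A minimal check of the same environment signal (a sketch, not the kubelet's watchdog_linux code):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "time"
    )

    func main() {
        usec := os.Getenv("WATCHDOG_USEC")
        if usec == "" {
            fmt.Println("systemd watchdog is not enabled")
            return
        }
        n, err := strconv.ParseInt(usec, 10, 64)
        if err != nil || n <= 0 {
            fmt.Println("watchdog interval is invalid:", usec)
            return
        }
        fmt.Println("watchdog interval:", time.Duration(n)*time.Microsecond)
    }
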
May 17 00:16:06.500832 kubelet[2482]: I0517 00:16:06.500799 2482 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:16:06.500890 kubelet[2482]: E0517 00:16:06.500844 2482 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:16:06.545352 kubelet[2482]: I0517 00:16:06.545292 2482 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:16:06.545352 kubelet[2482]: I0517 00:16:06.545312 2482 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:16:06.545352 kubelet[2482]: I0517 00:16:06.545332 2482 state_mem.go:36] "Initialized new in-memory state store" May 17 00:16:06.545581 kubelet[2482]: I0517 00:16:06.545493 2482 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:16:06.545581 kubelet[2482]: I0517 00:16:06.545504 2482 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:16:06.545581 kubelet[2482]: I0517 00:16:06.545521 2482 policy_none.go:49] "None policy: Start" May 17 00:16:06.545581 kubelet[2482]: I0517 00:16:06.545541 2482 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:16:06.545581 kubelet[2482]: I0517 00:16:06.545550 2482 state_mem.go:35] "Initializing new in-memory state store" May 17 00:16:06.545806 kubelet[2482]: I0517 00:16:06.545644 2482 state_mem.go:75] "Updated machine memory state" May 17 00:16:06.549733 kubelet[2482]: I0517 00:16:06.549713 2482 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:16:06.550037 kubelet[2482]: I0517 00:16:06.549950 2482 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:16:06.550072 kubelet[2482]: I0517 00:16:06.549973 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:16:06.550283 kubelet[2482]: I0517 00:16:06.550260 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:16:06.551493 kubelet[2482]: E0517 00:16:06.551406 2482 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:16:06.602309 kubelet[2482]: I0517 00:16:06.602234 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 00:16:06.602428 kubelet[2482]: I0517 00:16:06.602336 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:16:06.602552 kubelet[2482]: I0517 00:16:06.602522 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 00:16:06.659018 kubelet[2482]: I0517 00:16:06.658979 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:16:06.664179 kubelet[2482]: I0517 00:16:06.664155 2482 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 17 00:16:06.664308 kubelet[2482]: I0517 00:16:06.664232 2482 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 00:16:06.791882 kubelet[2482]: I0517 00:16:06.791766 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36abe138c9ef396f5d9d69068b14ee89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36abe138c9ef396f5d9d69068b14ee89\") " pod="kube-system/kube-apiserver-localhost" May 17 00:16:06.791882 kubelet[2482]: I0517 00:16:06.791800 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36abe138c9ef396f5d9d69068b14ee89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36abe138c9ef396f5d9d69068b14ee89\") " pod="kube-system/kube-apiserver-localhost" May 17 00:16:06.791882 kubelet[2482]: I0517 00:16:06.791819 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36abe138c9ef396f5d9d69068b14ee89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36abe138c9ef396f5d9d69068b14ee89\") " pod="kube-system/kube-apiserver-localhost" May 17 00:16:06.791882 kubelet[2482]: I0517 00:16:06.791852 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:06.791882 kubelet[2482]: I0517 00:16:06.791867 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:06.792091 kubelet[2482]: I0517 00:16:06.791903 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:06.792091 kubelet[2482]: I0517 00:16:06.791930 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:06.792091 kubelet[2482]: I0517 00:16:06.791957 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:16:06.792091 kubelet[2482]: I0517 00:16:06.791976 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 00:16:06.907972 kubelet[2482]: E0517 00:16:06.907941 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:06.910130 kubelet[2482]: E0517 00:16:06.910103 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:06.910189 kubelet[2482]: E0517 00:16:06.910165 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:07.482418 kubelet[2482]: I0517 00:16:07.482383 2482 apiserver.go:52] "Watching apiserver" May 17 00:16:07.491208 kubelet[2482]: I0517 00:16:07.491171 2482 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:16:07.533970 kubelet[2482]: E0517 00:16:07.533923 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:07.534305 kubelet[2482]: E0517 00:16:07.534056 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:07.534524 kubelet[2482]: I0517 00:16:07.534410 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:16:07.538972 kubelet[2482]: E0517 00:16:07.538939 2482 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:16:07.539091 kubelet[2482]: E0517 00:16:07.539075 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:07.556759 kubelet[2482]: I0517 00:16:07.556341 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.556324348 podStartE2EDuration="1.556324348s" podCreationTimestamp="2025-05-17 00:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:16:07.549808617 +0000 UTC 
m=+1.120491373" watchObservedRunningTime="2025-05-17 00:16:07.556324348 +0000 UTC m=+1.127007104" May 17 00:16:07.556759 kubelet[2482]: I0517 00:16:07.556478 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.556472105 podStartE2EDuration="1.556472105s" podCreationTimestamp="2025-05-17 00:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:16:07.556262532 +0000 UTC m=+1.126945288" watchObservedRunningTime="2025-05-17 00:16:07.556472105 +0000 UTC m=+1.127154861" May 17 00:16:07.567767 kubelet[2482]: I0517 00:16:07.567721 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5677070990000002 podStartE2EDuration="1.567707099s" podCreationTimestamp="2025-05-17 00:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:16:07.561402564 +0000 UTC m=+1.132085320" watchObservedRunningTime="2025-05-17 00:16:07.567707099 +0000 UTC m=+1.138389855" May 17 00:16:08.536467 kubelet[2482]: E0517 00:16:08.535304 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:08.536467 kubelet[2482]: E0517 00:16:08.536275 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:09.937199 kubelet[2482]: E0517 00:16:09.937157 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:10.943852 kubelet[2482]: E0517 00:16:10.943821 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:12.275698 kubelet[2482]: I0517 00:16:12.275657 2482 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:16:12.276132 containerd[1462]: time="2025-05-17T00:16:12.275990161Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:16:12.276368 kubelet[2482]: I0517 00:16:12.276166 2482 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:16:12.923397 systemd[1]: Created slice kubepods-besteffort-pod822fdca6_5901_42b8_85ec_f71a48d1a3f5.slice - libcontainer container kubepods-besteffort-pod822fdca6_5901_42b8_85ec_f71a48d1a3f5.slice. 
May 17 00:16:12.935645 kubelet[2482]: I0517 00:16:12.935616 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822fdca6-5901-42b8-85ec-f71a48d1a3f5-lib-modules\") pod \"kube-proxy-pp7f9\" (UID: \"822fdca6-5901-42b8-85ec-f71a48d1a3f5\") " pod="kube-system/kube-proxy-pp7f9" May 17 00:16:12.935757 kubelet[2482]: I0517 00:16:12.935649 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv6p6\" (UniqueName: \"kubernetes.io/projected/822fdca6-5901-42b8-85ec-f71a48d1a3f5-kube-api-access-jv6p6\") pod \"kube-proxy-pp7f9\" (UID: \"822fdca6-5901-42b8-85ec-f71a48d1a3f5\") " pod="kube-system/kube-proxy-pp7f9" May 17 00:16:12.935757 kubelet[2482]: I0517 00:16:12.935684 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/822fdca6-5901-42b8-85ec-f71a48d1a3f5-kube-proxy\") pod \"kube-proxy-pp7f9\" (UID: \"822fdca6-5901-42b8-85ec-f71a48d1a3f5\") " pod="kube-system/kube-proxy-pp7f9" May 17 00:16:12.935757 kubelet[2482]: I0517 00:16:12.935702 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/822fdca6-5901-42b8-85ec-f71a48d1a3f5-xtables-lock\") pod \"kube-proxy-pp7f9\" (UID: \"822fdca6-5901-42b8-85ec-f71a48d1a3f5\") " pod="kube-system/kube-proxy-pp7f9" May 17 00:16:13.233253 kubelet[2482]: E0517 00:16:13.233142 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:13.234975 containerd[1462]: time="2025-05-17T00:16:13.234943206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp7f9,Uid:822fdca6-5901-42b8-85ec-f71a48d1a3f5,Namespace:kube-system,Attempt:0,}" May 17 00:16:13.240807 systemd[1]: Created slice kubepods-besteffort-pod9f1fa9c2_dfa6_4f4f_bc40_1afffb03c141.slice - libcontainer container kubepods-besteffort-pod9f1fa9c2_dfa6_4f4f_bc40_1afffb03c141.slice. May 17 00:16:13.259632 containerd[1462]: time="2025-05-17T00:16:13.259543622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:13.259632 containerd[1462]: time="2025-05-17T00:16:13.259607865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:13.259632 containerd[1462]: time="2025-05-17T00:16:13.259620538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:13.259850 containerd[1462]: time="2025-05-17T00:16:13.259723716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:13.274251 systemd[1]: run-containerd-runc-k8s.io-f4c949b6c32592fd1d2b20eb97cb2d1f9457ef9cc8bc7c166aed9d266d8798b3-runc.PKED3O.mount: Deactivated successfully. May 17 00:16:13.284848 systemd[1]: Started cri-containerd-f4c949b6c32592fd1d2b20eb97cb2d1f9457ef9cc8bc7c166aed9d266d8798b3.scope - libcontainer container f4c949b6c32592fd1d2b20eb97cb2d1f9457ef9cc8bc7c166aed9d266d8798b3. 
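
The podcidr update at 00:16:12 and the RunPodSandbox record above are both RPCs on the CRI RuntimeService that the kubelet issues against containerd. A sketch of the same two calls using the k8s.io/cri-api Go client, stripped to the fields visible in the log and assuming containerd's default socket path (the pod UID is the one logged for kube-proxy-pp7f9):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI socket (default path; adjust if relocated).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Step 1: push the pod CIDR to the runtime, as kuberuntime_manager
	// logged at 00:16:12 ("Updating runtime config through cri with podcidr").
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: create the pod sandbox, mirroring the RunPodSandbox record.
	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-pp7f9",
				Uid:       "822fdca6-5901-42b8-85ec-f71a48d1a3f5",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox id: %s", resp.PodSandboxId)
}
```

The returned sandbox id (f4c949b6... in this boot) is what the CreateContainer call in the next block is scoped to.
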
May 17 00:16:13.306406 containerd[1462]: time="2025-05-17T00:16:13.306358705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp7f9,Uid:822fdca6-5901-42b8-85ec-f71a48d1a3f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4c949b6c32592fd1d2b20eb97cb2d1f9457ef9cc8bc7c166aed9d266d8798b3\"" May 17 00:16:13.307005 kubelet[2482]: E0517 00:16:13.306984 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:13.310130 containerd[1462]: time="2025-05-17T00:16:13.310098520Z" level=info msg="CreateContainer within sandbox \"f4c949b6c32592fd1d2b20eb97cb2d1f9457ef9cc8bc7c166aed9d266d8798b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:16:13.326317 containerd[1462]: time="2025-05-17T00:16:13.326264550Z" level=info msg="CreateContainer within sandbox \"f4c949b6c32592fd1d2b20eb97cb2d1f9457ef9cc8bc7c166aed9d266d8798b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6d866f9f744a307b168d93abce8a77797af8e606b5521cff1006492bd66330c\"" May 17 00:16:13.326803 containerd[1462]: time="2025-05-17T00:16:13.326774805Z" level=info msg="StartContainer for \"b6d866f9f744a307b168d93abce8a77797af8e606b5521cff1006492bd66330c\"" May 17 00:16:13.338902 kubelet[2482]: I0517 00:16:13.338850 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f1fa9c2-dfa6-4f4f-bc40-1afffb03c141-var-lib-calico\") pod \"tigera-operator-844669ff44-jl7pk\" (UID: \"9f1fa9c2-dfa6-4f4f-bc40-1afffb03c141\") " pod="tigera-operator/tigera-operator-844669ff44-jl7pk" May 17 00:16:13.338902 kubelet[2482]: I0517 00:16:13.338899 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lf8\" (UniqueName: \"kubernetes.io/projected/9f1fa9c2-dfa6-4f4f-bc40-1afffb03c141-kube-api-access-t6lf8\") pod \"tigera-operator-844669ff44-jl7pk\" (UID: \"9f1fa9c2-dfa6-4f4f-bc40-1afffb03c141\") " pod="tigera-operator/tigera-operator-844669ff44-jl7pk" May 17 00:16:13.363813 systemd[1]: Started cri-containerd-b6d866f9f744a307b168d93abce8a77797af8e606b5521cff1006492bd66330c.scope - libcontainer container b6d866f9f744a307b168d93abce8a77797af8e606b5521cff1006492bd66330c. 
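
Once the sandbox exists, the kubelet drives the rest of the lifecycle with CreateContainer and StartContainer against that sandbox id, exactly the pair of records above. A continuation sketch under the same assumptions (containerd's default socket; the sandbox id is passed as the first argument; the image reference is a placeholder, since the log records only the container name):

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	sandboxID := os.Args[1] // e.g. the f4c949b6... id returned above

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// CreateContainer scopes the new container to the existing sandbox.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Placeholder ref; the actual image is not shown in the log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:<tag>"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name: "kube-proxy-pp7f9", Namespace: "kube-system",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer corresponds to the "StartContainer ... returns
	// successfully" record that follows in the log.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s", created.ContainerId)
}
```
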
May 17 00:16:13.392286 containerd[1462]: time="2025-05-17T00:16:13.392242860Z" level=info msg="StartContainer for \"b6d866f9f744a307b168d93abce8a77797af8e606b5521cff1006492bd66330c\" returns successfully" May 17 00:16:13.543107 containerd[1462]: time="2025-05-17T00:16:13.542985785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-jl7pk,Uid:9f1fa9c2-dfa6-4f4f-bc40-1afffb03c141,Namespace:tigera-operator,Attempt:0,}" May 17 00:16:13.543276 kubelet[2482]: E0517 00:16:13.543005 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:13.551926 kubelet[2482]: I0517 00:16:13.551852 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pp7f9" podStartSLOduration=1.551833351 podStartE2EDuration="1.551833351s" podCreationTimestamp="2025-05-17 00:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:16:13.551098126 +0000 UTC m=+7.121780882" watchObservedRunningTime="2025-05-17 00:16:13.551833351 +0000 UTC m=+7.122516107" May 17 00:16:13.568006 containerd[1462]: time="2025-05-17T00:16:13.567911282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:13.568006 containerd[1462]: time="2025-05-17T00:16:13.567976767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:13.568006 containerd[1462]: time="2025-05-17T00:16:13.567988510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:13.568922 containerd[1462]: time="2025-05-17T00:16:13.568292210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:13.588805 systemd[1]: Started cri-containerd-7e57cfb63cf262ce148830dbee463d696067d43bfbc94fd2e079410756f39ca0.scope - libcontainer container 7e57cfb63cf262ce148830dbee463d696067d43bfbc94fd2e079410756f39ca0. May 17 00:16:13.628380 containerd[1462]: time="2025-05-17T00:16:13.628307526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-jl7pk,Uid:9f1fa9c2-dfa6-4f4f-bc40-1afffb03c141,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7e57cfb63cf262ce148830dbee463d696067d43bfbc94fd2e079410756f39ca0\"" May 17 00:16:13.630219 containerd[1462]: time="2025-05-17T00:16:13.630186476Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:16:14.979640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1634786056.mount: Deactivated successfully. 
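
The PullImage record that closes this block is the remaining CRI call in the sequence, issued on the ImageService rather than the RuntimeService. A sketch of the same pull, again assuming containerd's default socket:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)

	// Pulls block until complete, so allow a generous deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.0"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// ImageRef is the reference containerd resolved the tag to -- the
	// same digest that appears in the records that follow.
	log.Printf("pulled: %s", resp.ImageRef)
}
```
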
May 17 00:16:15.990419 containerd[1462]: time="2025-05-17T00:16:15.990371253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:15.991346 containerd[1462]: time="2025-05-17T00:16:15.991310043Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:16:15.992818 containerd[1462]: time="2025-05-17T00:16:15.992766901Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:15.995070 containerd[1462]: time="2025-05-17T00:16:15.995035656Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:15.995770 containerd[1462]: time="2025-05-17T00:16:15.995722846Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.365507304s" May 17 00:16:15.995802 containerd[1462]: time="2025-05-17T00:16:15.995766891Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:16:15.997358 containerd[1462]: time="2025-05-17T00:16:15.997330471Z" level=info msg="CreateContainer within sandbox \"7e57cfb63cf262ce148830dbee463d696067d43bfbc94fd2e079410756f39ca0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:16:16.013220 containerd[1462]: time="2025-05-17T00:16:16.013181518Z" level=info msg="CreateContainer within sandbox \"7e57cfb63cf262ce148830dbee463d696067d43bfbc94fd2e079410756f39ca0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"327c2b28a6f34c1bd231f3bfc047953b0685e7ea618778696ae78280d19588d1\"" May 17 00:16:16.013648 containerd[1462]: time="2025-05-17T00:16:16.013618901Z" level=info msg="StartContainer for \"327c2b28a6f34c1bd231f3bfc047953b0685e7ea618778696ae78280d19588d1\"" May 17 00:16:16.042831 systemd[1]: Started cri-containerd-327c2b28a6f34c1bd231f3bfc047953b0685e7ea618778696ae78280d19588d1.scope - libcontainer container 327c2b28a6f34c1bd231f3bfc047953b0685e7ea618778696ae78280d19588d1. 
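
Note the distinction in the pull result above: the repo tag (quay.io/tigera/operator:v1.38.0) is mutable, while the repo digest (...@sha256:e0a34b...) pins the exact manifest bytes, and the image id (sha256:5e43c1...) is the digest of the image config blob. All three use the same OCI digest form, a SHA-256 over the blob's bytes. A self-contained sketch of how such a digest is computed:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// digestOf returns the OCI-style digest ("sha256:<hex>") of a blob on
// disk, the same form as the repo digest recorded in the pull log above.
func digestOf(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", h.Sum(nil)), nil
}

func main() {
	d, err := digestOf(os.Args[1]) // e.g. a saved image manifest
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(d)
}
```
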
May 17 00:16:16.069416 containerd[1462]: time="2025-05-17T00:16:16.069376437Z" level=info msg="StartContainer for \"327c2b28a6f34c1bd231f3bfc047953b0685e7ea618778696ae78280d19588d1\" returns successfully" May 17 00:16:16.557471 kubelet[2482]: I0517 00:16:16.557351 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-jl7pk" podStartSLOduration=1.19056039 podStartE2EDuration="3.557331637s" podCreationTimestamp="2025-05-17 00:16:13 +0000 UTC" firstStartedPulling="2025-05-17 00:16:13.629609113 +0000 UTC m=+7.200291879" lastFinishedPulling="2025-05-17 00:16:15.99638037 +0000 UTC m=+9.567063126" observedRunningTime="2025-05-17 00:16:16.556931194 +0000 UTC m=+10.127613950" watchObservedRunningTime="2025-05-17 00:16:16.557331637 +0000 UTC m=+10.128014393" May 17 00:16:16.584907 kubelet[2482]: E0517 00:16:16.584848 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:17.553148 kubelet[2482]: E0517 00:16:17.553102 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:19.942054 kubelet[2482]: E0517 00:16:19.941973 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:20.555379 kubelet[2482]: E0517 00:16:20.555344 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:20.948625 kubelet[2482]: E0517 00:16:20.948440 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:21.204532 sudo[1642]: pam_unix(sudo:session): session closed for user root May 17 00:16:21.206371 sshd[1639]: pam_unix(sshd:session): session closed for user core May 17 00:16:21.209626 systemd[1]: sshd@6-10.0.0.66:22-10.0.0.1:54388.service: Deactivated successfully. May 17 00:16:21.211974 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:16:21.212235 systemd[1]: session-7.scope: Consumed 4.507s CPU time, 158.4M memory peak, 0B memory swap peak. May 17 00:16:21.214353 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. May 17 00:16:21.215829 systemd-logind[1446]: Removed session 7. May 17 00:16:23.508596 systemd[1]: Created slice kubepods-besteffort-pod22a86fa6_2b25_4128_aa53_a30228468ba3.slice - libcontainer container kubepods-besteffort-pod22a86fa6_2b25_4128_aa53_a30228468ba3.slice. 
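
The pod_startup_latency_tracker records are worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes time spent pulling images (for the static pods and kube-proxy earlier, firstStartedPulling and lastFinishedPulling are the zero time 0001-01-01, so the two durations coincide). For tigera-operator above: 16.557331637 - 13 = 3.557331637 s end to end, minus the 15.99638037 - 13.629609113 = 2.366771257 s pull, leaves about 1.19056038 s, matching the logged podStartSLOduration=1.19056039 to within the last printed digit. A sketch reproducing the arithmetic with Go's time package:

```go
package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.String() form used in the kubelet log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-17 00:16:13 +0000 UTC")
	observed := mustParse("2025-05-17 00:16:16.557331637 +0000 UTC")
	pullStart := mustParse("2025-05-17 00:16:13.629609113 +0000 UTC")
	pullEnd := mustParse("2025-05-17 00:16:15.99638037 +0000 UTC")

	e2e := observed.Sub(created)        // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration (pull excluded)

	fmt.Println("e2e:", e2e) // 3.557331637s
	fmt.Println("slo:", slo) // 1.19056038s
}
```
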
May 17 00:16:23.607316 kubelet[2482]: I0517 00:16:23.607269 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42qc\" (UniqueName: \"kubernetes.io/projected/22a86fa6-2b25-4128-aa53-a30228468ba3-kube-api-access-n42qc\") pod \"calico-typha-747fc8547c-2qc88\" (UID: \"22a86fa6-2b25-4128-aa53-a30228468ba3\") " pod="calico-system/calico-typha-747fc8547c-2qc88" May 17 00:16:23.607316 kubelet[2482]: I0517 00:16:23.607315 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22a86fa6-2b25-4128-aa53-a30228468ba3-tigera-ca-bundle\") pod \"calico-typha-747fc8547c-2qc88\" (UID: \"22a86fa6-2b25-4128-aa53-a30228468ba3\") " pod="calico-system/calico-typha-747fc8547c-2qc88" May 17 00:16:23.607790 kubelet[2482]: I0517 00:16:23.607336 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/22a86fa6-2b25-4128-aa53-a30228468ba3-typha-certs\") pod \"calico-typha-747fc8547c-2qc88\" (UID: \"22a86fa6-2b25-4128-aa53-a30228468ba3\") " pod="calico-system/calico-typha-747fc8547c-2qc88" May 17 00:16:23.812437 kubelet[2482]: E0517 00:16:23.812397 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:23.812985 containerd[1462]: time="2025-05-17T00:16:23.812930573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747fc8547c-2qc88,Uid:22a86fa6-2b25-4128-aa53-a30228468ba3,Namespace:calico-system,Attempt:0,}" May 17 00:16:24.073397 systemd[1]: Created slice kubepods-besteffort-pod0852bc39_90ca_4545_baf9_48e733666ba5.slice - libcontainer container kubepods-besteffort-pod0852bc39_90ca_4545_baf9_48e733666ba5.slice. May 17 00:16:24.089620 containerd[1462]: time="2025-05-17T00:16:24.089484386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:24.089620 containerd[1462]: time="2025-05-17T00:16:24.089565048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:24.089868 containerd[1462]: time="2025-05-17T00:16:24.089581149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:24.089868 containerd[1462]: time="2025-05-17T00:16:24.089667713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:24.110827 kubelet[2482]: I0517 00:16:24.110778 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0852bc39-90ca-4545-baf9-48e733666ba5-node-certs\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.110827 kubelet[2482]: I0517 00:16:24.110817 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sfz7\" (UniqueName: \"kubernetes.io/projected/0852bc39-90ca-4545-baf9-48e733666ba5-kube-api-access-8sfz7\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.110827 kubelet[2482]: I0517 00:16:24.110835 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-xtables-lock\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111017 kubelet[2482]: I0517 00:16:24.110849 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-flexvol-driver-host\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111017 kubelet[2482]: I0517 00:16:24.110865 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-policysync\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111017 kubelet[2482]: I0517 00:16:24.110881 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-var-run-calico\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111017 kubelet[2482]: I0517 00:16:24.110925 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-cni-bin-dir\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111017 kubelet[2482]: I0517 00:16:24.110953 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-lib-modules\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111129 kubelet[2482]: I0517 00:16:24.110971 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-var-lib-calico\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111129 
kubelet[2482]: I0517 00:16:24.111008 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-cni-log-dir\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111129 kubelet[2482]: I0517 00:16:24.111073 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0852bc39-90ca-4545-baf9-48e733666ba5-tigera-ca-bundle\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.111129 kubelet[2482]: I0517 00:16:24.111121 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0852bc39-90ca-4545-baf9-48e733666ba5-cni-net-dir\") pod \"calico-node-l7bxb\" (UID: \"0852bc39-90ca-4545-baf9-48e733666ba5\") " pod="calico-system/calico-node-l7bxb" May 17 00:16:24.115802 systemd[1]: Started cri-containerd-9c46011fa32ade320f40af754611ff4006f2393966cd3366f3c1411f3650421f.scope - libcontainer container 9c46011fa32ade320f40af754611ff4006f2393966cd3366f3c1411f3650421f. May 17 00:16:24.155671 containerd[1462]: time="2025-05-17T00:16:24.155630888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747fc8547c-2qc88,Uid:22a86fa6-2b25-4128-aa53-a30228468ba3,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c46011fa32ade320f40af754611ff4006f2393966cd3366f3c1411f3650421f\"" May 17 00:16:24.157011 kubelet[2482]: E0517 00:16:24.156899 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:24.158461 containerd[1462]: time="2025-05-17T00:16:24.158429386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:16:24.199738 update_engine[1451]: I20250517 00:16:24.199652 1451 update_attempter.cc:509] Updating boot flags... May 17 00:16:24.213109 kubelet[2482]: E0517 00:16:24.213011 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:24.221325 kubelet[2482]: E0517 00:16:24.221218 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.221325 kubelet[2482]: W0517 00:16:24.221246 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.221325 kubelet[2482]: E0517 00:16:24.221283 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.228217 kubelet[2482]: E0517 00:16:24.228125 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.228217 kubelet[2482]: W0517 00:16:24.228148 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.228217 kubelet[2482]: E0517 00:16:24.228169 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.235716 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2942) May 17 00:16:24.289725 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2943) May 17 00:16:24.295871 kubelet[2482]: E0517 00:16:24.295812 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.296092 kubelet[2482]: W0517 00:16:24.295843 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.296092 kubelet[2482]: E0517 00:16:24.295982 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.296538 kubelet[2482]: E0517 00:16:24.296446 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.296538 kubelet[2482]: W0517 00:16:24.296458 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.296538 kubelet[2482]: E0517 00:16:24.296468 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.297148 kubelet[2482]: E0517 00:16:24.296857 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.297148 kubelet[2482]: W0517 00:16:24.296886 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.297148 kubelet[2482]: E0517 00:16:24.296896 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.297414 kubelet[2482]: E0517 00:16:24.297383 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.297554 kubelet[2482]: W0517 00:16:24.297457 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.297554 kubelet[2482]: E0517 00:16:24.297480 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.298141 kubelet[2482]: E0517 00:16:24.298074 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.298141 kubelet[2482]: W0517 00:16:24.298087 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.298141 kubelet[2482]: E0517 00:16:24.298096 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.299043 kubelet[2482]: E0517 00:16:24.299030 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.299410 kubelet[2482]: W0517 00:16:24.299394 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.299894 kubelet[2482]: E0517 00:16:24.299816 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.301121 kubelet[2482]: E0517 00:16:24.300730 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.301121 kubelet[2482]: W0517 00:16:24.300741 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.301121 kubelet[2482]: E0517 00:16:24.300751 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.301121 kubelet[2482]: E0517 00:16:24.300933 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.301121 kubelet[2482]: W0517 00:16:24.300940 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.301121 kubelet[2482]: E0517 00:16:24.300948 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.301356 kubelet[2482]: E0517 00:16:24.301164 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.301356 kubelet[2482]: W0517 00:16:24.301172 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.301356 kubelet[2482]: E0517 00:16:24.301181 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.301634 kubelet[2482]: E0517 00:16:24.301623 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.302072 kubelet[2482]: W0517 00:16:24.302019 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.302072 kubelet[2482]: E0517 00:16:24.302035 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.302306 kubelet[2482]: E0517 00:16:24.302296 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.302908 kubelet[2482]: W0517 00:16:24.302361 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.302908 kubelet[2482]: E0517 00:16:24.302374 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.303222 kubelet[2482]: E0517 00:16:24.303125 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.303222 kubelet[2482]: W0517 00:16:24.303136 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.303222 kubelet[2482]: E0517 00:16:24.303145 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.303415 kubelet[2482]: E0517 00:16:24.303405 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.303525 kubelet[2482]: W0517 00:16:24.303463 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.303525 kubelet[2482]: E0517 00:16:24.303476 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.303830 kubelet[2482]: E0517 00:16:24.303819 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.303890 kubelet[2482]: W0517 00:16:24.303880 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.303939 kubelet[2482]: E0517 00:16:24.303929 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.304586 kubelet[2482]: E0517 00:16:24.304411 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.304586 kubelet[2482]: W0517 00:16:24.304423 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.304586 kubelet[2482]: E0517 00:16:24.304432 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.304730 kubelet[2482]: E0517 00:16:24.304720 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.304838 kubelet[2482]: W0517 00:16:24.304780 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.304838 kubelet[2482]: E0517 00:16:24.304792 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.305107 kubelet[2482]: E0517 00:16:24.305096 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.305166 kubelet[2482]: W0517 00:16:24.305155 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.305217 kubelet[2482]: E0517 00:16:24.305207 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.305975 kubelet[2482]: E0517 00:16:24.305964 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.306079 kubelet[2482]: W0517 00:16:24.306029 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.306079 kubelet[2482]: E0517 00:16:24.306042 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.306387 kubelet[2482]: E0517 00:16:24.306299 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.306387 kubelet[2482]: W0517 00:16:24.306309 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.306387 kubelet[2482]: E0517 00:16:24.306317 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.306617 kubelet[2482]: E0517 00:16:24.306538 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.306617 kubelet[2482]: W0517 00:16:24.306548 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.306617 kubelet[2482]: E0517 00:16:24.306557 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.315914 kubelet[2482]: E0517 00:16:24.314793 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.315914 kubelet[2482]: W0517 00:16:24.314816 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.315914 kubelet[2482]: E0517 00:16:24.314835 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.315914 kubelet[2482]: I0517 00:16:24.314874 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be1c8442-ea83-4a8e-9428-f2f62d4e4acf-socket-dir\") pod \"csi-node-driver-9cgfz\" (UID: \"be1c8442-ea83-4a8e-9428-f2f62d4e4acf\") " pod="calico-system/csi-node-driver-9cgfz" May 17 00:16:24.315914 kubelet[2482]: E0517 00:16:24.315124 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.315914 kubelet[2482]: W0517 00:16:24.315133 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.315914 kubelet[2482]: E0517 00:16:24.315155 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.315914 kubelet[2482]: I0517 00:16:24.315172 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be1c8442-ea83-4a8e-9428-f2f62d4e4acf-kubelet-dir\") pod \"csi-node-driver-9cgfz\" (UID: \"be1c8442-ea83-4a8e-9428-f2f62d4e4acf\") " pod="calico-system/csi-node-driver-9cgfz" May 17 00:16:24.315914 kubelet[2482]: E0517 00:16:24.315399 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.316220 kubelet[2482]: W0517 00:16:24.315410 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.316220 kubelet[2482]: E0517 00:16:24.315439 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.316220 kubelet[2482]: I0517 00:16:24.315458 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/be1c8442-ea83-4a8e-9428-f2f62d4e4acf-varrun\") pod \"csi-node-driver-9cgfz\" (UID: \"be1c8442-ea83-4a8e-9428-f2f62d4e4acf\") " pod="calico-system/csi-node-driver-9cgfz" May 17 00:16:24.316220 kubelet[2482]: E0517 00:16:24.315792 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.316220 kubelet[2482]: W0517 00:16:24.315801 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.316220 kubelet[2482]: E0517 00:16:24.315813 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.316696 kubelet[2482]: E0517 00:16:24.316460 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.316696 kubelet[2482]: W0517 00:16:24.316471 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.316696 kubelet[2482]: E0517 00:16:24.316573 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.316944 kubelet[2482]: E0517 00:16:24.316933 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.316997 kubelet[2482]: W0517 00:16:24.316987 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.317140 kubelet[2482]: E0517 00:16:24.317128 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.317252 kubelet[2482]: E0517 00:16:24.317239 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.317326 kubelet[2482]: W0517 00:16:24.317313 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.317445 kubelet[2482]: E0517 00:16:24.317423 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.321708 kubelet[2482]: E0517 00:16:24.319154 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.321797 kubelet[2482]: W0517 00:16:24.321784 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.322790 kubelet[2482]: E0517 00:16:24.322734 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.322790 kubelet[2482]: I0517 00:16:24.322771 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nbj2\" (UniqueName: \"kubernetes.io/projected/be1c8442-ea83-4a8e-9428-f2f62d4e4acf-kube-api-access-7nbj2\") pod \"csi-node-driver-9cgfz\" (UID: \"be1c8442-ea83-4a8e-9428-f2f62d4e4acf\") " pod="calico-system/csi-node-driver-9cgfz" May 17 00:16:24.325978 kubelet[2482]: E0517 00:16:24.322853 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.325978 kubelet[2482]: W0517 00:16:24.322879 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.325978 kubelet[2482]: E0517 00:16:24.322904 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.325978 kubelet[2482]: E0517 00:16:24.323149 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.325978 kubelet[2482]: W0517 00:16:24.323157 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.325978 kubelet[2482]: E0517 00:16:24.323166 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.325978 kubelet[2482]: I0517 00:16:24.323188 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be1c8442-ea83-4a8e-9428-f2f62d4e4acf-registration-dir\") pod \"csi-node-driver-9cgfz\" (UID: \"be1c8442-ea83-4a8e-9428-f2f62d4e4acf\") " pod="calico-system/csi-node-driver-9cgfz" May 17 00:16:24.325978 kubelet[2482]: E0517 00:16:24.325742 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.325978 kubelet[2482]: W0517 00:16:24.325754 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.326193 kubelet[2482]: E0517 00:16:24.325765 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.326193 kubelet[2482]: E0517 00:16:24.326051 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.326193 kubelet[2482]: W0517 00:16:24.326065 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.326193 kubelet[2482]: E0517 00:16:24.326082 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.326326 kubelet[2482]: E0517 00:16:24.326306 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.326326 kubelet[2482]: W0517 00:16:24.326320 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.326371 kubelet[2482]: E0517 00:16:24.326339 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.327726 kubelet[2482]: E0517 00:16:24.326913 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.327726 kubelet[2482]: W0517 00:16:24.326926 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.327726 kubelet[2482]: E0517 00:16:24.326935 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.327830 kubelet[2482]: E0517 00:16:24.327771 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.327830 kubelet[2482]: W0517 00:16:24.327782 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.327830 kubelet[2482]: E0517 00:16:24.327794 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.377979 containerd[1462]: time="2025-05-17T00:16:24.377852973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l7bxb,Uid:0852bc39-90ca-4545-baf9-48e733666ba5,Namespace:calico-system,Attempt:0,}" May 17 00:16:24.405043 containerd[1462]: time="2025-05-17T00:16:24.404827525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:24.405043 containerd[1462]: time="2025-05-17T00:16:24.404900443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:24.405043 containerd[1462]: time="2025-05-17T00:16:24.404916544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:24.405226 containerd[1462]: time="2025-05-17T00:16:24.405037042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:24.427011 kubelet[2482]: E0517 00:16:24.426970 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.427011 kubelet[2482]: W0517 00:16:24.426996 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.427011 kubelet[2482]: E0517 00:16:24.427016 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:16:24.427290 kubelet[2482]: E0517 00:16:24.427271 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.427290 kubelet[2482]: W0517 00:16:24.427287 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.427343 kubelet[2482]: E0517 00:16:24.427309 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:16:24.427620 kubelet[2482]: E0517 00:16:24.427604 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:24.427620 kubelet[2482]: W0517 00:16:24.427616 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:24.427696 kubelet[2482]: E0517 00:16:24.427634 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[log elided: the three-entry FlexVolume probe failure above repeats verbatim, with only timestamps advancing, through May 17 00:16:24.440846]
May 17 00:16:24.431912 systemd[1]: Started cri-containerd-b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546.scope - libcontainer container b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546.
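Note on the elided triplet: kubelet's FlexVolume prober execs the driver binary with the single argument init and expects a JSON status object on stdout (that is what driver-call.go and plugins.go are reporting). A minimal sketch of that handshake, not kubelet's actual code, using the driver path from the log: with the binary missing, the exec fails with empty output, and unmarshalling the empty output yields exactly "unexpected end of JSON input".

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver prints on stdout.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        out, err := exec.Command(driver, "init").CombinedOutput()
        if err != nil {
            // With the binary absent, err is non-nil and out stays empty,
            // matching the logged: driver call failed ... output: ""
            fmt.Printf("FlexVolume: driver call failed: %v, output: %q\n", err, out)
        }
        var status DriverStatus
        if uerr := json.Unmarshal(out, &status); uerr != nil {
            // json.Unmarshal of empty input reproduces the logged error text.
            fmt.Printf("Failed to unmarshal output for command: init, error: %v\n", uerr)
        }
    }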
May 17 00:16:24.461413 containerd[1462]: time="2025-05-17T00:16:24.461374867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l7bxb,Uid:0852bc39-90ca-4545-baf9-48e733666ba5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546\"" May 17 00:16:25.501171 kubelet[2482]: E0517 00:16:25.501104 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:25.851654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637741053.mount: Deactivated successfully. May 17 00:16:26.720613 containerd[1462]: time="2025-05-17T00:16:26.720556580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:26.721256 containerd[1462]: time="2025-05-17T00:16:26.721222639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:16:26.722546 containerd[1462]: time="2025-05-17T00:16:26.722469146Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:26.724299 containerd[1462]: time="2025-05-17T00:16:26.724268558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:26.724883 containerd[1462]: time="2025-05-17T00:16:26.724852242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.566384304s" May 17 00:16:26.724883 containerd[1462]: time="2025-05-17T00:16:26.724881206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:16:26.731392 containerd[1462]: time="2025-05-17T00:16:26.731361761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:16:26.750561 containerd[1462]: time="2025-05-17T00:16:26.750498105Z" level=info msg="CreateContainer within sandbox \"9c46011fa32ade320f40af754611ff4006f2393966cd3366f3c1411f3650421f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:16:26.763780 containerd[1462]: time="2025-05-17T00:16:26.763745153Z" level=info msg="CreateContainer within sandbox \"9c46011fa32ade320f40af754611ff4006f2393966cd3366f3c1411f3650421f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ebe9e04326fb459996e97b6e5a3d6365af4388d64e16d1539a6f8703a4e4b8ed\"" May 17 00:16:26.764248 containerd[1462]: time="2025-05-17T00:16:26.764216434Z" level=info msg="StartContainer for \"ebe9e04326fb459996e97b6e5a3d6365af4388d64e16d1539a6f8703a4e4b8ed\"" May 17 00:16:26.790931 systemd[1]: Started cri-containerd-ebe9e04326fb459996e97b6e5a3d6365af4388d64e16d1539a6f8703a4e4b8ed.scope - libcontainer container ebe9e04326fb459996e97b6e5a3d6365af4388d64e16d1539a6f8703a4e4b8ed.
May 17 00:16:26.831619 containerd[1462]: time="2025-05-17T00:16:26.831577986Z" level=info msg="StartContainer for \"ebe9e04326fb459996e97b6e5a3d6365af4388d64e16d1539a6f8703a4e4b8ed\" returns successfully" May 17 00:16:27.511997 kubelet[2482]: E0517 00:16:27.511941 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:27.585118 kubelet[2482]: E0517 00:16:27.585080 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:27.601897 kubelet[2482]: I0517 00:16:27.601823 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-747fc8547c-2qc88" podStartSLOduration=2.02801304 podStartE2EDuration="4.601800684s" podCreationTimestamp="2025-05-17 00:16:23 +0000 UTC" firstStartedPulling="2025-05-17 00:16:24.157389848 +0000 UTC m=+17.728072594" lastFinishedPulling="2025-05-17 00:16:26.731177482 +0000 UTC m=+20.301860238" observedRunningTime="2025-05-17 00:16:27.598846781 +0000 UTC m=+21.169529537" watchObservedRunningTime="2025-05-17 00:16:27.601800684 +0000 UTC m=+21.172483440"
May 17 00:16:27.629398 kubelet[2482]: E0517 00:16:27.629356 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:16:27.629398 kubelet[2482]: W0517 00:16:27.629385 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:16:27.630040 kubelet[2482]: E0517 00:16:27.630021 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[log elided: the FlexVolume probe failure triplet above repeats verbatim, with only timestamps advancing, through May 17 00:16:27.660866]
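Note on the "Nameserver limits exceeded" entries: kubelet caps the resolv.conf it hands to pods at three nameservers (the classic glibc resolver limit), so any extra servers in the host configuration are dropped and only the first three are applied, as the log shows. A small illustrative sketch, with a hypothetical four-server host list, reproduces the applied line:

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers is the resolv.conf cap kubelet applies (glibc honors 3).
    const maxNameservers = 3

    func main() {
        // Hypothetical host resolv.conf list; only the first three survive.
        nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        if len(nameservers) > maxNameservers {
            nameservers = nameservers[:maxNameservers]
        }
        fmt.Println("the applied nameserver line is:", strings.Join(nameservers, " "))
    }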
May 17 00:16:28.197260 containerd[1462]: time="2025-05-17T00:16:28.197215204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:28.198192 containerd[1462]: time="2025-05-17T00:16:28.198150981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:16:28.199418 containerd[1462]: time="2025-05-17T00:16:28.199379061Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:28.201610 containerd[1462]: time="2025-05-17T00:16:28.201575851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:28.202157 containerd[1462]: time="2025-05-17T00:16:28.202120790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.470654763s" May 17 00:16:28.202157 containerd[1462]: time="2025-05-17T00:16:28.202151138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:16:28.204884 containerd[1462]: time="2025-05-17T00:16:28.204241827Z" level=info msg="CreateContainer within sandbox \"b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:16:28.220357 containerd[1462]: time="2025-05-17T00:16:28.220319007Z" level=info msg="CreateContainer within sandbox \"b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3\"" May 17 00:16:28.220799 containerd[1462]: time="2025-05-17T00:16:28.220775920Z" level=info msg="StartContainer for \"151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3\"" May 17 00:16:28.253798 systemd[1]: Started cri-containerd-151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3.scope - libcontainer container 151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3.
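Note on the flexvol-driver container started above: its job is to install a working uds binary at the plugin path kubelet has been probing, which is what eventually silences the elided probe failures. A hypothetical stand-in for such a driver, answering the FlexVolume init call with the conventional JSON status (attach:false, since calico does not use the attach path), would look roughly like this:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Conventional FlexVolume init answer; the real uds driver's
            // exact output may differ.
            resp := map[string]interface{}{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            }
            b, _ := json.Marshal(resp)
            fmt.Println(string(b))
            return
        }
        // Other calls: report not supported, per the FlexVolume convention.
        fmt.Println(`{"status": "Not supported"}`)
        os.Exit(1)
    }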
May 17 00:16:28.280749 containerd[1462]: time="2025-05-17T00:16:28.280705049Z" level=info msg="StartContainer for \"151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3\" returns successfully" May 17 00:16:28.292114 systemd[1]: cri-containerd-151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3.scope: Deactivated successfully. May 17 00:16:28.575076 containerd[1462]: time="2025-05-17T00:16:28.572401281Z" level=info msg="shim disconnected" id=151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3 namespace=k8s.io May 17 00:16:28.575076 containerd[1462]: time="2025-05-17T00:16:28.575068009Z" level=warning msg="cleaning up after shim disconnected" id=151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3 namespace=k8s.io May 17 00:16:28.575076 containerd[1462]: time="2025-05-17T00:16:28.575086764Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:16:28.588189 kubelet[2482]: I0517 00:16:28.588158 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:16:28.588786 kubelet[2482]: E0517 00:16:28.588396 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:29.216150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-151bacddff66f6b541c5ec6c11dc3636e5b201f00a648010b1cd46c6f72c47c3-rootfs.mount: Deactivated successfully. May 17 00:16:29.501220 kubelet[2482]: E0517 00:16:29.501073 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:29.592618 containerd[1462]: time="2025-05-17T00:16:29.592571582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:16:31.501193 kubelet[2482]: E0517 00:16:31.501151 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:33.052081 containerd[1462]: time="2025-05-17T00:16:33.052017140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:33.053109 containerd[1462]: time="2025-05-17T00:16:33.053027484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:16:33.054295 containerd[1462]: time="2025-05-17T00:16:33.054245270Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:33.056501 containerd[1462]: time="2025-05-17T00:16:33.056441310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:33.057120 containerd[1462]: time="2025-05-17T00:16:33.057067421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id 
\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.464456034s" May 17 00:16:33.057120 containerd[1462]: time="2025-05-17T00:16:33.057111444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:16:33.059636 containerd[1462]: time="2025-05-17T00:16:33.059595307Z" level=info msg="CreateContainer within sandbox \"b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:16:33.078268 containerd[1462]: time="2025-05-17T00:16:33.078222146Z" level=info msg="CreateContainer within sandbox \"b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239\"" May 17 00:16:33.078717 containerd[1462]: time="2025-05-17T00:16:33.078696910Z" level=info msg="StartContainer for \"41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239\"" May 17 00:16:33.110814 systemd[1]: Started cri-containerd-41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239.scope - libcontainer container 41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239. May 17 00:16:33.139039 containerd[1462]: time="2025-05-17T00:16:33.138995896Z" level=info msg="StartContainer for \"41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239\" returns successfully" May 17 00:16:33.501866 kubelet[2482]: E0517 00:16:33.501818 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:34.540562 containerd[1462]: time="2025-05-17T00:16:34.540515085Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:16:34.543529 systemd[1]: cri-containerd-41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239.scope: Deactivated successfully. May 17 00:16:34.566004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239-rootfs.mount: Deactivated successfully. May 17 00:16:34.608920 kubelet[2482]: I0517 00:16:34.608887 2482 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:16:34.759263 systemd[1]: Created slice kubepods-besteffort-pod0222fa69_3882_4194_90c7_5cf5983ab063.slice - libcontainer container kubepods-besteffort-pod0222fa69_3882_4194_90c7_5cf5983ab063.slice. May 17 00:16:34.764618 systemd[1]: Created slice kubepods-besteffort-pod2af7b115_8c11_4444_9b1c_fa1f02b3517f.slice - libcontainer container kubepods-besteffort-pod2af7b115_8c11_4444_9b1c_fa1f02b3517f.slice. May 17 00:16:34.770670 systemd[1]: Created slice kubepods-besteffort-pod54da1e60_c26d_45aa_84ac_d213e8845274.slice - libcontainer container kubepods-besteffort-pod54da1e60_c26d_45aa_84ac_d213e8845274.slice. 
May 17 00:16:34.775017 containerd[1462]: time="2025-05-17T00:16:34.774935620Z" level=info msg="shim disconnected" id=41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239 namespace=k8s.io May 17 00:16:34.775017 containerd[1462]: time="2025-05-17T00:16:34.774995312Z" level=warning msg="cleaning up after shim disconnected" id=41a5e7629dee635b7234c1dafbc346298762301812af60734e66d184238d7239 namespace=k8s.io May 17 00:16:34.775017 containerd[1462]: time="2025-05-17T00:16:34.775006444Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:16:34.779736 systemd[1]: Created slice kubepods-besteffort-poda14251ec_79d4_49b3_94ab_87e70e4faa0d.slice - libcontainer container kubepods-besteffort-poda14251ec_79d4_49b3_94ab_87e70e4faa0d.slice. May 17 00:16:34.786851 systemd[1]: Created slice kubepods-burstable-pod2864e22f_90c3_40b7_81ed_054edc334c43.slice - libcontainer container kubepods-burstable-pod2864e22f_90c3_40b7_81ed_054edc334c43.slice. May 17 00:16:34.792990 systemd[1]: Created slice kubepods-burstable-podfbfa8f9e_2caa_4166_b768_e488cc5c9d0d.slice - libcontainer container kubepods-burstable-podfbfa8f9e_2caa_4166_b768_e488cc5c9d0d.slice. May 17 00:16:34.802756 systemd[1]: Created slice kubepods-besteffort-pode170e92d_6fac_4790_9e44_4b5889f835a0.slice - libcontainer container kubepods-besteffort-pode170e92d_6fac_4790_9e44_4b5889f835a0.slice. May 17 00:16:34.805029 kubelet[2482]: I0517 00:16:34.804999 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e170e92d-6fac-4790-9e44-4b5889f835a0-calico-apiserver-certs\") pod \"calico-apiserver-67f459565f-mjks8\" (UID: \"e170e92d-6fac-4790-9e44-4b5889f835a0\") " pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" May 17 00:16:34.805097 kubelet[2482]: I0517 00:16:34.805034 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnww\" (UniqueName: \"kubernetes.io/projected/e170e92d-6fac-4790-9e44-4b5889f835a0-kube-api-access-kpnww\") pod \"calico-apiserver-67f459565f-mjks8\" (UID: \"e170e92d-6fac-4790-9e44-4b5889f835a0\") " pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" May 17 00:16:34.805097 kubelet[2482]: I0517 00:16:34.805053 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-ca-bundle\") pod \"whisker-5ff7b45b78-h4g9j\" (UID: \"0222fa69-3882-4194-90c7-5cf5983ab063\") " pod="calico-system/whisker-5ff7b45b78-h4g9j" May 17 00:16:34.805097 kubelet[2482]: I0517 00:16:34.805068 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cs6m\" (UniqueName: \"kubernetes.io/projected/fbfa8f9e-2caa-4166-b768-e488cc5c9d0d-kube-api-access-2cs6m\") pod \"coredns-668d6bf9bc-vmtqw\" (UID: \"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d\") " pod="kube-system/coredns-668d6bf9bc-vmtqw" May 17 00:16:34.805097 kubelet[2482]: I0517 00:16:34.805083 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56v7r\" (UniqueName: \"kubernetes.io/projected/0222fa69-3882-4194-90c7-5cf5983ab063-kube-api-access-56v7r\") pod \"whisker-5ff7b45b78-h4g9j\" (UID: \"0222fa69-3882-4194-90c7-5cf5983ab063\") " pod="calico-system/whisker-5ff7b45b78-h4g9j" May 17 00:16:34.805213 kubelet[2482]: I0517 00:16:34.805099 2482 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcht\" (UniqueName: \"kubernetes.io/projected/2864e22f-90c3-40b7-81ed-054edc334c43-kube-api-access-nhcht\") pod \"coredns-668d6bf9bc-xdkt4\" (UID: \"2864e22f-90c3-40b7-81ed-054edc334c43\") " pod="kube-system/coredns-668d6bf9bc-xdkt4" May 17 00:16:34.805213 kubelet[2482]: I0517 00:16:34.805114 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2864e22f-90c3-40b7-81ed-054edc334c43-config-volume\") pod \"coredns-668d6bf9bc-xdkt4\" (UID: \"2864e22f-90c3-40b7-81ed-054edc334c43\") " pod="kube-system/coredns-668d6bf9bc-xdkt4" May 17 00:16:34.805213 kubelet[2482]: I0517 00:16:34.805129 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a14251ec-79d4-49b3-94ab-87e70e4faa0d-tigera-ca-bundle\") pod \"calico-kube-controllers-7cd784c9b6-wxxjz\" (UID: \"a14251ec-79d4-49b3-94ab-87e70e4faa0d\") " pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" May 17 00:16:34.805213 kubelet[2482]: I0517 00:16:34.805146 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fpj\" (UniqueName: \"kubernetes.io/projected/2af7b115-8c11-4444-9b1c-fa1f02b3517f-kube-api-access-g7fpj\") pod \"goldmane-78d55f7ddc-8b9gs\" (UID: \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\") " pod="calico-system/goldmane-78d55f7ddc-8b9gs" May 17 00:16:34.805213 kubelet[2482]: I0517 00:16:34.805160 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727kt\" (UniqueName: \"kubernetes.io/projected/54da1e60-c26d-45aa-84ac-d213e8845274-kube-api-access-727kt\") pod \"calico-apiserver-67f459565f-9ljdj\" (UID: \"54da1e60-c26d-45aa-84ac-d213e8845274\") " pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" May 17 00:16:34.805344 kubelet[2482]: I0517 00:16:34.805175 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbfa8f9e-2caa-4166-b768-e488cc5c9d0d-config-volume\") pod \"coredns-668d6bf9bc-vmtqw\" (UID: \"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d\") " pod="kube-system/coredns-668d6bf9bc-vmtqw" May 17 00:16:34.805344 kubelet[2482]: I0517 00:16:34.805194 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkjp\" (UniqueName: \"kubernetes.io/projected/a14251ec-79d4-49b3-94ab-87e70e4faa0d-kube-api-access-2qkjp\") pod \"calico-kube-controllers-7cd784c9b6-wxxjz\" (UID: \"a14251ec-79d4-49b3-94ab-87e70e4faa0d\") " pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" May 17 00:16:34.805344 kubelet[2482]: I0517 00:16:34.805211 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2af7b115-8c11-4444-9b1c-fa1f02b3517f-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-8b9gs\" (UID: \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\") " pod="calico-system/goldmane-78d55f7ddc-8b9gs" May 17 00:16:34.805344 kubelet[2482]: I0517 00:16:34.805225 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-backend-key-pair\") pod \"whisker-5ff7b45b78-h4g9j\" (UID: \"0222fa69-3882-4194-90c7-5cf5983ab063\") " pod="calico-system/whisker-5ff7b45b78-h4g9j" May 17 00:16:34.805344 kubelet[2482]: I0517 00:16:34.805240 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af7b115-8c11-4444-9b1c-fa1f02b3517f-config\") pod \"goldmane-78d55f7ddc-8b9gs\" (UID: \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\") " pod="calico-system/goldmane-78d55f7ddc-8b9gs" May 17 00:16:34.805494 kubelet[2482]: I0517 00:16:34.805254 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/54da1e60-c26d-45aa-84ac-d213e8845274-calico-apiserver-certs\") pod \"calico-apiserver-67f459565f-9ljdj\" (UID: \"54da1e60-c26d-45aa-84ac-d213e8845274\") " pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" May 17 00:16:34.805494 kubelet[2482]: I0517 00:16:34.805268 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2af7b115-8c11-4444-9b1c-fa1f02b3517f-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-8b9gs\" (UID: \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\") " pod="calico-system/goldmane-78d55f7ddc-8b9gs" May 17 00:16:34.806915 containerd[1462]: time="2025-05-17T00:16:34.806870239Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:16:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:16:35.063354 containerd[1462]: time="2025-05-17T00:16:35.063315551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ff7b45b78-h4g9j,Uid:0222fa69-3882-4194-90c7-5cf5983ab063,Namespace:calico-system,Attempt:0,}" May 17 00:16:35.068107 containerd[1462]: time="2025-05-17T00:16:35.068076582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-8b9gs,Uid:2af7b115-8c11-4444-9b1c-fa1f02b3517f,Namespace:calico-system,Attempt:0,}" May 17 00:16:35.074783 containerd[1462]: time="2025-05-17T00:16:35.074741139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-9ljdj,Uid:54da1e60-c26d-45aa-84ac-d213e8845274,Namespace:calico-apiserver,Attempt:0,}" May 17 00:16:35.083377 containerd[1462]: time="2025-05-17T00:16:35.083345721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd784c9b6-wxxjz,Uid:a14251ec-79d4-49b3-94ab-87e70e4faa0d,Namespace:calico-system,Attempt:0,}" May 17 00:16:35.089817 kubelet[2482]: E0517 00:16:35.089784 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:35.090134 containerd[1462]: time="2025-05-17T00:16:35.090083968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xdkt4,Uid:2864e22f-90c3-40b7-81ed-054edc334c43,Namespace:kube-system,Attempt:0,}" May 17 00:16:35.099007 kubelet[2482]: E0517 00:16:35.098983 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:35.099408 containerd[1462]: time="2025-05-17T00:16:35.099369775Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmtqw,Uid:fbfa8f9e-2caa-4166-b768-e488cc5c9d0d,Namespace:kube-system,Attempt:0,}" May 17 00:16:35.106049 containerd[1462]: time="2025-05-17T00:16:35.106018652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-mjks8,Uid:e170e92d-6fac-4790-9e44-4b5889f835a0,Namespace:calico-apiserver,Attempt:0,}" May 17 00:16:35.339822 containerd[1462]: time="2025-05-17T00:16:35.339358439Z" level=error msg="Failed to destroy network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.340826 containerd[1462]: time="2025-05-17T00:16:35.340789665Z" level=error msg="Failed to destroy network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.342532 containerd[1462]: time="2025-05-17T00:16:35.341710941Z" level=error msg="Failed to destroy network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.342532 containerd[1462]: time="2025-05-17T00:16:35.341933380Z" level=error msg="Failed to destroy network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.347772 containerd[1462]: time="2025-05-17T00:16:35.347737216Z" level=error msg="encountered an error cleaning up failed sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.347822 containerd[1462]: time="2025-05-17T00:16:35.347770629Z" level=error msg="Failed to destroy network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.347822 containerd[1462]: time="2025-05-17T00:16:35.347801987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-mjks8,Uid:e170e92d-6fac-4790-9e44-4b5889f835a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.347922 containerd[1462]: time="2025-05-17T00:16:35.347836684Z" level=error msg="Failed to destroy network for sandbox 
\"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.347999 containerd[1462]: time="2025-05-17T00:16:35.347961148Z" level=error msg="encountered an error cleaning up failed sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348083 containerd[1462]: time="2025-05-17T00:16:35.348019998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd784c9b6-wxxjz,Uid:a14251ec-79d4-49b3-94ab-87e70e4faa0d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348155 containerd[1462]: time="2025-05-17T00:16:35.347749940Z" level=error msg="encountered an error cleaning up failed sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348155 containerd[1462]: time="2025-05-17T00:16:35.348111351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ff7b45b78-h4g9j,Uid:0222fa69-3882-4194-90c7-5cf5983ab063,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348220 containerd[1462]: time="2025-05-17T00:16:35.348123704Z" level=error msg="encountered an error cleaning up failed sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348220 containerd[1462]: time="2025-05-17T00:16:35.348162587Z" level=error msg="Failed to destroy network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348220 containerd[1462]: time="2025-05-17T00:16:35.348181694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xdkt4,Uid:2864e22f-90c3-40b7-81ed-054edc334c43,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348299 containerd[1462]: time="2025-05-17T00:16:35.347759468Z" level=error msg="encountered an error cleaning up failed sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348299 containerd[1462]: time="2025-05-17T00:16:35.348258458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-8b9gs,Uid:2af7b115-8c11-4444-9b1c-fa1f02b3517f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348553 containerd[1462]: time="2025-05-17T00:16:35.348473974Z" level=error msg="encountered an error cleaning up failed sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348553 containerd[1462]: time="2025-05-17T00:16:35.348513558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmtqw,Uid:fbfa8f9e-2caa-4166-b768-e488cc5c9d0d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348553 containerd[1462]: time="2025-05-17T00:16:35.348542914Z" level=error msg="encountered an error cleaning up failed sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.348657 containerd[1462]: time="2025-05-17T00:16:35.348579593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-9ljdj,Uid:54da1e60-c26d-45aa-84ac-d213e8845274,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357026 kubelet[2482]: E0517 00:16:35.356788 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357026 kubelet[2482]: E0517 00:16:35.356818 
2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357026 kubelet[2482]: E0517 00:16:35.356838 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357026 kubelet[2482]: E0517 00:16:35.356825 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357026 kubelet[2482]: E0517 00:16:35.356829 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357214 kubelet[2482]: E0517 00:16:35.356861 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" May 17 00:16:35.357214 kubelet[2482]: E0517 00:16:35.356875 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357214 kubelet[2482]: E0517 00:16:35.356882 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" May 17 00:16:35.357214 kubelet[2482]: E0517 00:16:35.356892 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vmtqw" May 17 00:16:35.357305 kubelet[2482]: E0517 00:16:35.356893 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" May 17 00:16:35.357305 kubelet[2482]: E0517 00:16:35.356790 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.357305 kubelet[2482]: E0517 00:16:35.356907 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vmtqw" May 17 00:16:35.357305 kubelet[2482]: E0517 00:16:35.356915 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" May 17 00:16:35.357396 kubelet[2482]: E0517 00:16:35.356917 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" May 17 00:16:35.357396 kubelet[2482]: E0517 00:16:35.356933 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" May 17 00:16:35.357396 kubelet[2482]: E0517 00:16:35.356924 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f459565f-mjks8_calico-apiserver(e170e92d-6fac-4790-9e44-4b5889f835a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f459565f-mjks8_calico-apiserver(e170e92d-6fac-4790-9e44-4b5889f835a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" podUID="e170e92d-6fac-4790-9e44-4b5889f835a0" May 17 00:16:35.357500 kubelet[2482]: E0517 00:16:35.356938 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vmtqw_kube-system(fbfa8f9e-2caa-4166-b768-e488cc5c9d0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vmtqw_kube-system(fbfa8f9e-2caa-4166-b768-e488cc5c9d0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vmtqw" podUID="fbfa8f9e-2caa-4166-b768-e488cc5c9d0d" May 17 00:16:35.357500 kubelet[2482]: E0517 00:16:35.356861 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-8b9gs" May 17 00:16:35.357500 kubelet[2482]: E0517 00:16:35.356962 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f459565f-9ljdj_calico-apiserver(54da1e60-c26d-45aa-84ac-d213e8845274)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f459565f-9ljdj_calico-apiserver(54da1e60-c26d-45aa-84ac-d213e8845274)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" podUID="54da1e60-c26d-45aa-84ac-d213e8845274" May 17 00:16:35.357599 kubelet[2482]: E0517 00:16:35.356961 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cd784c9b6-wxxjz_calico-system(a14251ec-79d4-49b3-94ab-87e70e4faa0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cd784c9b6-wxxjz_calico-system(a14251ec-79d4-49b3-94ab-87e70e4faa0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" podUID="a14251ec-79d4-49b3-94ab-87e70e4faa0d" May 17 00:16:35.357599 kubelet[2482]: E0517 00:16:35.356893 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ff7b45b78-h4g9j" May 17 00:16:35.357599 kubelet[2482]: E0517 00:16:35.357003 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ff7b45b78-h4g9j" May 17 00:16:35.357782 kubelet[2482]: E0517 00:16:35.357027 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5ff7b45b78-h4g9j_calico-system(0222fa69-3882-4194-90c7-5cf5983ab063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5ff7b45b78-h4g9j_calico-system(0222fa69-3882-4194-90c7-5cf5983ab063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ff7b45b78-h4g9j" podUID="0222fa69-3882-4194-90c7-5cf5983ab063" May 17 00:16:35.357782 kubelet[2482]: E0517 00:16:35.356970 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-8b9gs" May 17 00:16:35.357782 kubelet[2482]: E0517 00:16:35.356990 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xdkt4" May 17 00:16:35.357869 kubelet[2482]: E0517 00:16:35.357062 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-8b9gs_calico-system(2af7b115-8c11-4444-9b1c-fa1f02b3517f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-8b9gs_calico-system(2af7b115-8c11-4444-9b1c-fa1f02b3517f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f" May 17 00:16:35.357869 kubelet[2482]: E0517 00:16:35.357070 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xdkt4" May 17 00:16:35.357869 kubelet[2482]: E0517 00:16:35.357097 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xdkt4_kube-system(2864e22f-90c3-40b7-81ed-054edc334c43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xdkt4_kube-system(2864e22f-90c3-40b7-81ed-054edc334c43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xdkt4" podUID="2864e22f-90c3-40b7-81ed-054edc334c43" May 17 00:16:35.506892 systemd[1]: Created slice kubepods-besteffort-podbe1c8442_ea83_4a8e_9428_f2f62d4e4acf.slice - libcontainer container kubepods-besteffort-podbe1c8442_ea83_4a8e_9428_f2f62d4e4acf.slice. May 17 00:16:35.508850 containerd[1462]: time="2025-05-17T00:16:35.508820948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9cgfz,Uid:be1c8442-ea83-4a8e-9428-f2f62d4e4acf,Namespace:calico-system,Attempt:0,}" May 17 00:16:35.561508 containerd[1462]: time="2025-05-17T00:16:35.561345624Z" level=error msg="Failed to destroy network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.561929 containerd[1462]: time="2025-05-17T00:16:35.561762219Z" level=error msg="encountered an error cleaning up failed sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.561929 containerd[1462]: time="2025-05-17T00:16:35.561808185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9cgfz,Uid:be1c8442-ea83-4a8e-9428-f2f62d4e4acf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.562052 kubelet[2482]: E0517 00:16:35.562012 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.562122 kubelet[2482]: E0517 00:16:35.562070 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9cgfz" May 17 00:16:35.562122 kubelet[2482]: E0517 00:16:35.562089 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9cgfz" May 17 00:16:35.562188 kubelet[2482]: E0517 00:16:35.562129 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9cgfz_calico-system(be1c8442-ea83-4a8e-9428-f2f62d4e4acf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9cgfz_calico-system(be1c8442-ea83-4a8e-9428-f2f62d4e4acf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:35.604494 kubelet[2482]: I0517 00:16:35.604262 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" May 17 00:16:35.606300 kubelet[2482]: I0517 00:16:35.605555 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:16:35.606368 containerd[1462]: time="2025-05-17T00:16:35.604640049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:16:35.608874 kubelet[2482]: I0517 00:16:35.608714 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:16:35.610435 kubelet[2482]: I0517 00:16:35.610407 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:16:35.638811 containerd[1462]: time="2025-05-17T00:16:35.638757553Z" level=info msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\"" May 17 00:16:35.639285 containerd[1462]: time="2025-05-17T00:16:35.639044964Z" level=info msg="Ensure that sandbox 457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e in task-service has been cleanup successfully" May 17 00:16:35.640128 kubelet[2482]: I0517 00:16:35.639613 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:16:35.640372 containerd[1462]: time="2025-05-17T00:16:35.640337690Z" level=info msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\"" May 17 00:16:35.641986 containerd[1462]: time="2025-05-17T00:16:35.640471212Z" level=info msg="Ensure that sandbox 
75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607 in task-service has been cleanup successfully" May 17 00:16:35.641986 containerd[1462]: time="2025-05-17T00:16:35.638779595Z" level=info msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\"" May 17 00:16:35.641986 containerd[1462]: time="2025-05-17T00:16:35.638757804Z" level=info msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\"" May 17 00:16:35.641986 containerd[1462]: time="2025-05-17T00:16:35.641715807Z" level=info msg="Ensure that sandbox cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a in task-service has been cleanup successfully" May 17 00:16:35.642224 containerd[1462]: time="2025-05-17T00:16:35.642195070Z" level=info msg="Ensure that sandbox bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358 in task-service has been cleanup successfully" May 17 00:16:35.661907 containerd[1462]: time="2025-05-17T00:16:35.661832070Z" level=info msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\"" May 17 00:16:35.663050 kubelet[2482]: I0517 00:16:35.663023 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:16:35.664495 containerd[1462]: time="2025-05-17T00:16:35.664432069Z" level=info msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\"" May 17 00:16:35.664783 containerd[1462]: time="2025-05-17T00:16:35.664588253Z" level=info msg="Ensure that sandbox 4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8 in task-service has been cleanup successfully" May 17 00:16:35.665407 containerd[1462]: time="2025-05-17T00:16:35.665385976Z" level=info msg="Ensure that sandbox 2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176 in task-service has been cleanup successfully" May 17 00:16:35.669333 kubelet[2482]: I0517 00:16:35.669294 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:16:35.674052 containerd[1462]: time="2025-05-17T00:16:35.674009875Z" level=info msg="StopPodSandbox for \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\"" May 17 00:16:35.674957 kubelet[2482]: I0517 00:16:35.674879 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:16:35.675371 containerd[1462]: time="2025-05-17T00:16:35.674198711Z" level=info msg="Ensure that sandbox 0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374 in task-service has been cleanup successfully" May 17 00:16:35.679423 containerd[1462]: time="2025-05-17T00:16:35.679332774Z" level=info msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\"" May 17 00:16:35.679584 containerd[1462]: time="2025-05-17T00:16:35.679559982Z" level=info msg="Ensure that sandbox fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116 in task-service has been cleanup successfully" May 17 00:16:35.688982 containerd[1462]: time="2025-05-17T00:16:35.688932172Z" level=error msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" failed" error="failed to destroy network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.689274 kubelet[2482]: E0517 00:16:35.689180 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:16:35.689327 kubelet[2482]: E0517 00:16:35.689252 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e"} May 17 00:16:35.689353 kubelet[2482]: E0517 00:16:35.689330 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0222fa69-3882-4194-90c7-5cf5983ab063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.689409 kubelet[2482]: E0517 00:16:35.689361 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0222fa69-3882-4194-90c7-5cf5983ab063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ff7b45b78-h4g9j" podUID="0222fa69-3882-4194-90c7-5cf5983ab063" May 17 00:16:35.692425 containerd[1462]: time="2025-05-17T00:16:35.692372754Z" level=error msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" failed" error="failed to destroy network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.694771 kubelet[2482]: E0517 00:16:35.694725 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" May 17 00:16:35.694849 kubelet[2482]: E0517 00:16:35.694784 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"} May 17 00:16:35.694849 kubelet[2482]: E0517 00:16:35.694830 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"a14251ec-79d4-49b3-94ab-87e70e4faa0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.694968 kubelet[2482]: E0517 00:16:35.694858 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a14251ec-79d4-49b3-94ab-87e70e4faa0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" podUID="a14251ec-79d4-49b3-94ab-87e70e4faa0d" May 17 00:16:35.704162 containerd[1462]: time="2025-05-17T00:16:35.704054845Z" level=error msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" failed" error="failed to destroy network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.704353 kubelet[2482]: E0517 00:16:35.704306 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:16:35.704397 kubelet[2482]: E0517 00:16:35.704370 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358"} May 17 00:16:35.704483 kubelet[2482]: E0517 00:16:35.704424 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2864e22f-90c3-40b7-81ed-054edc334c43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.704483 kubelet[2482]: E0517 00:16:35.704464 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2864e22f-90c3-40b7-81ed-054edc334c43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xdkt4" podUID="2864e22f-90c3-40b7-81ed-054edc334c43" May 17 00:16:35.704618 containerd[1462]: 
time="2025-05-17T00:16:35.704342478Z" level=error msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" failed" error="failed to destroy network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.705813 kubelet[2482]: E0517 00:16:35.704543 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:16:35.705813 kubelet[2482]: E0517 00:16:35.705184 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8"} May 17 00:16:35.705813 kubelet[2482]: E0517 00:16:35.705213 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.705813 kubelet[2482]: E0517 00:16:35.705236 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f" May 17 00:16:35.714696 containerd[1462]: time="2025-05-17T00:16:35.713962854Z" level=error msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" failed" error="failed to destroy network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.714696 containerd[1462]: time="2025-05-17T00:16:35.714337620Z" level=error msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" failed" error="failed to destroy network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.714837 kubelet[2482]: E0517 00:16:35.714163 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:16:35.714837 kubelet[2482]: E0517 00:16:35.714205 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116"} May 17 00:16:35.714837 kubelet[2482]: E0517 00:16:35.714242 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.714837 kubelet[2482]: E0517 00:16:35.714268 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vmtqw" podUID="fbfa8f9e-2caa-4166-b768-e488cc5c9d0d" May 17 00:16:35.714998 kubelet[2482]: E0517 00:16:35.714592 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:16:35.714998 kubelet[2482]: E0517 00:16:35.714632 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607"} May 17 00:16:35.714998 kubelet[2482]: E0517 00:16:35.714662 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54da1e60-c26d-45aa-84ac-d213e8845274\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.714998 kubelet[2482]: E0517 00:16:35.714732 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54da1e60-c26d-45aa-84ac-d213e8845274\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" podUID="54da1e60-c26d-45aa-84ac-d213e8845274" May 17 00:16:35.721097 containerd[1462]: time="2025-05-17T00:16:35.721045829Z" level=error msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" failed" error="failed to destroy network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.721423 kubelet[2482]: E0517 00:16:35.721373 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:16:35.721484 kubelet[2482]: E0517 00:16:35.721440 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176"} May 17 00:16:35.721515 kubelet[2482]: E0517 00:16:35.721485 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e170e92d-6fac-4790-9e44-4b5889f835a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.721574 kubelet[2482]: E0517 00:16:35.721515 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e170e92d-6fac-4790-9e44-4b5889f835a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" podUID="e170e92d-6fac-4790-9e44-4b5889f835a0" May 17 00:16:35.725156 containerd[1462]: time="2025-05-17T00:16:35.725116129Z" level=error msg="StopPodSandbox for \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\" failed" error="failed to destroy network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:35.725259 kubelet[2482]: E0517 00:16:35.725238 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:16:35.725300 kubelet[2482]: E0517 00:16:35.725265 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374"} May 17 00:16:35.725300 kubelet[2482]: E0517 00:16:35.725287 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be1c8442-ea83-4a8e-9428-f2f62d4e4acf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:35.725367 kubelet[2482]: E0517 00:16:35.725304 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be1c8442-ea83-4a8e-9428-f2f62d4e4acf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9cgfz" podUID="be1c8442-ea83-4a8e-9428-f2f62d4e4acf" May 17 00:16:42.532530 systemd[1]: Started sshd@7-10.0.0.66:22-10.0.0.1:49318.service - OpenSSH per-connection server daemon (10.0.0.1:49318). May 17 00:16:42.566153 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 49318 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:42.568005 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:42.572303 systemd-logind[1446]: New session 8 of user core. May 17 00:16:42.581819 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:16:42.784066 sshd[3699]: pam_unix(sshd:session): session closed for user core May 17 00:16:42.789020 systemd[1]: sshd@7-10.0.0.66:22-10.0.0.1:49318.service: Deactivated successfully. May 17 00:16:42.791159 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:16:42.792040 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. May 17 00:16:42.792977 systemd-logind[1446]: Removed session 8. May 17 00:16:46.503507 containerd[1462]: time="2025-05-17T00:16:46.503461015Z" level=info msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\"" May 17 00:16:46.528418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount498857957.mount: Deactivated successfully. 
May 17 00:16:46.529404 containerd[1462]: time="2025-05-17T00:16:46.529231982Z" level=error msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" failed" error="failed to destroy network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:46.529662 kubelet[2482]: E0517 00:16:46.529469 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:16:46.529662 kubelet[2482]: E0517 00:16:46.529522 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176"} May 17 00:16:46.529662 kubelet[2482]: E0517 00:16:46.529557 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e170e92d-6fac-4790-9e44-4b5889f835a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:46.529662 kubelet[2482]: E0517 00:16:46.529584 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e170e92d-6fac-4790-9e44-4b5889f835a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" podUID="e170e92d-6fac-4790-9e44-4b5889f835a0" May 17 00:16:47.502605 containerd[1462]: time="2025-05-17T00:16:47.502219071Z" level=info msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\"" May 17 00:16:47.502605 containerd[1462]: time="2025-05-17T00:16:47.502267342Z" level=info msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\"" May 17 00:16:47.502605 containerd[1462]: time="2025-05-17T00:16:47.502218891Z" level=info msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\"" May 17 00:16:47.513644 containerd[1462]: time="2025-05-17T00:16:47.513511087Z" level=info msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\"" May 17 00:16:47.544335 containerd[1462]: time="2025-05-17T00:16:47.544281333Z" level=error msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" failed" error="failed to destroy network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:47.544546 kubelet[2482]: E0517 00:16:47.544506 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:16:47.544884 kubelet[2482]: E0517 00:16:47.544562 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607"} May 17 00:16:47.544884 kubelet[2482]: E0517 00:16:47.544599 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54da1e60-c26d-45aa-84ac-d213e8845274\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:47.544884 kubelet[2482]: E0517 00:16:47.544621 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54da1e60-c26d-45aa-84ac-d213e8845274\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" podUID="54da1e60-c26d-45aa-84ac-d213e8845274" May 17 00:16:47.546611 containerd[1462]: time="2025-05-17T00:16:47.546559856Z" level=error msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" failed" error="failed to destroy network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:47.546859 kubelet[2482]: E0517 00:16:47.546809 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:16:47.546910 kubelet[2482]: E0517 00:16:47.546869 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116"} May 17 00:16:47.546910 kubelet[2482]: E0517 00:16:47.546904 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:47.546981 kubelet[2482]: E0517 00:16:47.546927 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vmtqw" podUID="fbfa8f9e-2caa-4166-b768-e488cc5c9d0d" May 17 00:16:47.547034 containerd[1462]: time="2025-05-17T00:16:47.546906428Z" level=error msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" failed" error="failed to destroy network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:47.547063 kubelet[2482]: E0517 00:16:47.547015 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:16:47.547063 kubelet[2482]: E0517 00:16:47.547050 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8"} May 17 00:16:47.547115 kubelet[2482]: E0517 00:16:47.547076 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:47.547115 kubelet[2482]: E0517 00:16:47.547102 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2af7b115-8c11-4444-9b1c-fa1f02b3517f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f" May 17 00:16:47.795830 systemd[1]: Started sshd@8-10.0.0.66:22-10.0.0.1:49324.service - OpenSSH 
per-connection server daemon (10.0.0.1:49324). May 17 00:16:48.686487 containerd[1462]: time="2025-05-17T00:16:48.686427346Z" level=error msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" failed" error="failed to destroy network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:48.687889 kubelet[2482]: E0517 00:16:48.687247 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:16:48.687889 kubelet[2482]: E0517 00:16:48.687308 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358"} May 17 00:16:48.687889 kubelet[2482]: E0517 00:16:48.687350 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2864e22f-90c3-40b7-81ed-054edc334c43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:48.687889 kubelet[2482]: E0517 00:16:48.687380 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2864e22f-90c3-40b7-81ed-054edc334c43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xdkt4" podUID="2864e22f-90c3-40b7-81ed-054edc334c43" May 17 00:16:48.827802 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 49324 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:48.829668 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:48.833765 systemd-logind[1446]: New session 9 of user core. May 17 00:16:48.849784 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 17 00:16:48.915142 containerd[1462]: time="2025-05-17T00:16:48.915073779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:48.954212 containerd[1462]: time="2025-05-17T00:16:48.954047896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:16:48.981004 containerd[1462]: time="2025-05-17T00:16:48.980949714Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:49.022944 containerd[1462]: time="2025-05-17T00:16:49.022888738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:49.023428 containerd[1462]: time="2025-05-17T00:16:49.023386083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 13.418712972s" May 17 00:16:49.023428 containerd[1462]: time="2025-05-17T00:16:49.023425307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:16:49.030892 containerd[1462]: time="2025-05-17T00:16:49.030856460Z" level=info msg="CreateContainer within sandbox \"b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:16:49.100724 sshd[3813]: pam_unix(sshd:session): session closed for user core May 17 00:16:49.105132 systemd[1]: sshd@8-10.0.0.66:22-10.0.0.1:49324.service: Deactivated successfully. May 17 00:16:49.107644 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:16:49.108287 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. May 17 00:16:49.109048 systemd-logind[1446]: Removed session 9. 
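The calico/node pull that just completed reports 156396234 bytes in 13.418712972s, i.e. roughly 11.7 MB/s. A quick sanity check on those two figures, both taken verbatim from the "Pulled image" message above:

```go
package main

import "fmt"

func main() {
	// Figures from the log: repo-digest size and reported pull duration.
	const bytes = 156396234.0
	const seconds = 13.418712972
	fmt.Printf("%.2f MB/s (%.2f MiB/s)\n", bytes/seconds/1e6, bytes/seconds/(1<<20))
}
```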
May 17 00:16:49.502275 containerd[1462]: time="2025-05-17T00:16:49.502235872Z" level=info msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\"" May 17 00:16:49.524355 containerd[1462]: time="2025-05-17T00:16:49.524290627Z" level=error msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" failed" error="failed to destroy network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:16:49.524584 kubelet[2482]: E0517 00:16:49.524535 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" May 17 00:16:49.524627 kubelet[2482]: E0517 00:16:49.524597 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"} May 17 00:16:49.524653 kubelet[2482]: E0517 00:16:49.524633 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a14251ec-79d4-49b3-94ab-87e70e4faa0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:16:49.524725 kubelet[2482]: E0517 00:16:49.524655 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a14251ec-79d4-49b3-94ab-87e70e4faa0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" podUID="a14251ec-79d4-49b3-94ab-87e70e4faa0d" May 17 00:16:49.564881 containerd[1462]: time="2025-05-17T00:16:49.564846367Z" level=info msg="CreateContainer within sandbox \"b890b75932c51e7ba7dae6334fdf3553b7f9771f19026926783dccdee2f7f546\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"837846e8e28ef597e91e2c33f6de41bf2110d256fa2ca7819e9e23b20a750a39\"" May 17 00:16:49.565648 containerd[1462]: time="2025-05-17T00:16:49.565261066Z" level=info msg="StartContainer for \"837846e8e28ef597e91e2c33f6de41bf2110d256fa2ca7819e9e23b20a750a39\"" May 17 00:16:49.622797 systemd[1]: Started cri-containerd-837846e8e28ef597e91e2c33f6de41bf2110d256fa2ca7819e9e23b20a750a39.scope - libcontainer container 837846e8e28ef597e91e2c33f6de41bf2110d256fa2ca7819e9e23b20a750a39. May 17 00:16:49.728458 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
May 17 00:16:49.728569 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:16:49.748210 containerd[1462]: time="2025-05-17T00:16:49.748155431Z" level=info msg="StartContainer for \"837846e8e28ef597e91e2c33f6de41bf2110d256fa2ca7819e9e23b20a750a39\" returns successfully" May 17 00:16:49.955436 kubelet[2482]: I0517 00:16:49.955382 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l7bxb" podStartSLOduration=1.393650184 podStartE2EDuration="25.955367193s" podCreationTimestamp="2025-05-17 00:16:24 +0000 UTC" firstStartedPulling="2025-05-17 00:16:24.462377515 +0000 UTC m=+18.033060271" lastFinishedPulling="2025-05-17 00:16:49.024094524 +0000 UTC m=+42.594777280" observedRunningTime="2025-05-17 00:16:49.95447689 +0000 UTC m=+43.525159646" watchObservedRunningTime="2025-05-17 00:16:49.955367193 +0000 UTC m=+43.526049949" May 17 00:16:50.452630 kubelet[2482]: I0517 00:16:50.452396 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:16:50.452810 kubelet[2482]: E0517 00:16:50.452719 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:50.502463 containerd[1462]: time="2025-05-17T00:16:50.502394532Z" level=info msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\"" May 17 00:16:50.502763 containerd[1462]: time="2025-05-17T00:16:50.502732928Z" level=info msg="StopPodSandbox for \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\"" May 17 00:16:50.752718 kubelet[2482]: E0517 00:16:50.752564 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:16:51.060516 systemd[1]: run-containerd-runc-k8s.io-837846e8e28ef597e91e2c33f6de41bf2110d256fa2ca7819e9e23b20a750a39-runc.ajgA1D.mount: Deactivated successfully. May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.242 [INFO][3951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.243 [INFO][3951] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" iface="eth0" netns="/var/run/netns/cni-c077ac3d-a3f7-4f57-e408-30e5bb65c9ef" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.244 [INFO][3951] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" iface="eth0" netns="/var/run/netns/cni-c077ac3d-a3f7-4f57-e408-30e5bb65c9ef" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.244 [INFO][3951] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" iface="eth0" netns="/var/run/netns/cni-c077ac3d-a3f7-4f57-e408-30e5bb65c9ef" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.244 [INFO][3951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.245 [INFO][3951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.305 [INFO][3993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.306 [INFO][3993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.306 [INFO][3993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.400 [WARNING][3993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.400 [INFO][3993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.401 [INFO][3993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:16:51.407826 containerd[1462]: 2025-05-17 00:16:51.404 [INFO][3951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:16:51.408593 containerd[1462]: time="2025-05-17T00:16:51.407932606Z" level=info msg="TearDown network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\" successfully" May 17 00:16:51.408593 containerd[1462]: time="2025-05-17T00:16:51.407964977Z" level=info msg="StopPodSandbox for \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\" returns successfully" May 17 00:16:51.408974 containerd[1462]: time="2025-05-17T00:16:51.408925311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9cgfz,Uid:be1c8442-ea83-4a8e-9428-f2f62d4e4acf,Namespace:calico-system,Attempt:1,}" May 17 00:16:51.411248 systemd[1]: run-netns-cni\x2dc077ac3d\x2da3f7\x2d4f57\x2de408\x2d30e5bb65c9ef.mount: Deactivated successfully. May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.243 [INFO][3950] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.244 [INFO][3950] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" iface="eth0" netns="/var/run/netns/cni-9fbff0ab-9999-eee1-ac84-a64f9f7481f0" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.244 [INFO][3950] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" iface="eth0" netns="/var/run/netns/cni-9fbff0ab-9999-eee1-ac84-a64f9f7481f0" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.244 [INFO][3950] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" iface="eth0" netns="/var/run/netns/cni-9fbff0ab-9999-eee1-ac84-a64f9f7481f0" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.245 [INFO][3950] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.245 [INFO][3950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.305 [INFO][3994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.306 [INFO][3994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.401 [INFO][3994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.590 [WARNING][3994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.590 [INFO][3994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.591 [INFO][3994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:16:51.597338 containerd[1462]: 2025-05-17 00:16:51.594 [INFO][3950] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:16:51.599837 containerd[1462]: time="2025-05-17T00:16:51.599791207Z" level=info msg="TearDown network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" successfully" May 17 00:16:51.599837 containerd[1462]: time="2025-05-17T00:16:51.599829368Z" level=info msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" returns successfully" May 17 00:16:51.599847 systemd[1]: run-netns-cni\x2d9fbff0ab\x2d9999\x2deee1\x2dac84\x2da64f9f7481f0.mount: Deactivated successfully. 
May 17 00:16:51.600849 containerd[1462]: time="2025-05-17T00:16:51.600729550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ff7b45b78-h4g9j,Uid:0222fa69-3882-4194-90c7-5cf5983ab063,Namespace:calico-system,Attempt:1,}" May 17 00:16:52.522016 systemd-networkd[1397]: cali7d4b70ee9f6: Link UP May 17 00:16:52.522343 systemd-networkd[1397]: cali7d4b70ee9f6: Gained carrier May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.376 [INFO][4020] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.385 [INFO][4020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9cgfz-eth0 csi-node-driver- calico-system be1c8442-ea83-4a8e-9428-f2f62d4e4acf 994 0 2025-05-17 00:16:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9cgfz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7d4b70ee9f6 [] [] }} ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.385 [INFO][4020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.408 [INFO][4034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" HandleID="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.409 [INFO][4034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" HandleID="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f660), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9cgfz", "timestamp":"2025-05-17 00:16:52.408867066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.409 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.409 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.409 [INFO][4034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.439 [INFO][4034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.472 [INFO][4034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.477 [INFO][4034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.479 [INFO][4034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.481 [INFO][4034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.481 [INFO][4034] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.483 [INFO][4034] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.488 [INFO][4034] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.506 [INFO][4034] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.506 [INFO][4034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" host="localhost" May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.506 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:16:52.641185 containerd[1462]: 2025-05-17 00:16:52.506 [INFO][4034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" HandleID="k8s-pod-network.4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:52.642123 containerd[1462]: 2025-05-17 00:16:52.509 [INFO][4020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9cgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be1c8442-ea83-4a8e-9428-f2f62d4e4acf", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9cgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b70ee9f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:16:52.642123 containerd[1462]: 2025-05-17 00:16:52.509 [INFO][4020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:52.642123 containerd[1462]: 2025-05-17 00:16:52.509 [INFO][4020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d4b70ee9f6 ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:52.642123 containerd[1462]: 2025-05-17 00:16:52.521 [INFO][4020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:52.642123 containerd[1462]: 2025-05-17 00:16:52.521 [INFO][4020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9cgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be1c8442-ea83-4a8e-9428-f2f62d4e4acf", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f", Pod:"csi-node-driver-9cgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b70ee9f6", MAC:"4e:49:08:6d:4f:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:16:52.642123 containerd[1462]: 2025-05-17 00:16:52.637 [INFO][4020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f" Namespace="calico-system" Pod="csi-node-driver-9cgfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:16:52.672862 systemd-networkd[1397]: calid8c27c6d87c: Link UP May 17 00:16:52.674468 systemd-networkd[1397]: calid8c27c6d87c: Gained carrier May 17 00:16:52.683417 containerd[1462]: time="2025-05-17T00:16:52.682824437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:52.683417 containerd[1462]: time="2025-05-17T00:16:52.683388216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:52.683417 containerd[1462]: time="2025-05-17T00:16:52.683400940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:52.683592 containerd[1462]: time="2025-05-17T00:16:52.683472925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:52.720436 systemd[1]: Started cri-containerd-4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f.scope - libcontainer container 4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f. 
May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.425 [INFO][4042] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.472 [INFO][4042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0 whisker-5ff7b45b78- calico-system 0222fa69-3882-4194-90c7-5cf5983ab063 993 0 2025-05-17 00:16:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5ff7b45b78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5ff7b45b78-h4g9j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid8c27c6d87c [] [] }} ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.472 [INFO][4042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.498 [INFO][4057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.498 [INFO][4057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5ff7b45b78-h4g9j", "timestamp":"2025-05-17 00:16:52.498804478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.499 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.506 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.506 [INFO][4057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.637 [INFO][4057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.643 [INFO][4057] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.647 [INFO][4057] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.648 [INFO][4057] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.650 [INFO][4057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.650 [INFO][4057] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.651 [INFO][4057] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.656 [INFO][4057] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.665 [INFO][4057] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.665 [INFO][4057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" host="localhost" May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.665 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:16:52.721972 containerd[1462]: 2025-05-17 00:16:52.665 [INFO][4057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:52.722537 containerd[1462]: 2025-05-17 00:16:52.669 [INFO][4042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0", GenerateName:"whisker-5ff7b45b78-", Namespace:"calico-system", SelfLink:"", UID:"0222fa69-3882-4194-90c7-5cf5983ab063", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ff7b45b78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5ff7b45b78-h4g9j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8c27c6d87c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:16:52.722537 containerd[1462]: 2025-05-17 00:16:52.669 [INFO][4042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:52.722537 containerd[1462]: 2025-05-17 00:16:52.669 [INFO][4042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8c27c6d87c ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:52.722537 containerd[1462]: 2025-05-17 00:16:52.672 [INFO][4042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:52.722537 containerd[1462]: 2025-05-17 00:16:52.673 [INFO][4042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0", GenerateName:"whisker-5ff7b45b78-", Namespace:"calico-system", SelfLink:"", UID:"0222fa69-3882-4194-90c7-5cf5983ab063", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ff7b45b78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e", Pod:"whisker-5ff7b45b78-h4g9j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8c27c6d87c", MAC:"8a:55:03:0a:af:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:16:52.722537 containerd[1462]: 2025-05-17 00:16:52.715 [INFO][4042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Namespace="calico-system" Pod="whisker-5ff7b45b78-h4g9j" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:16:52.749989 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:16:52.769478 containerd[1462]: time="2025-05-17T00:16:52.769428807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9cgfz,Uid:be1c8442-ea83-4a8e-9428-f2f62d4e4acf,Namespace:calico-system,Attempt:1,} returns sandbox id \"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f\"" May 17 00:16:52.771545 containerd[1462]: time="2025-05-17T00:16:52.771505338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:16:52.893520 containerd[1462]: time="2025-05-17T00:16:52.893449767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:16:52.893885 containerd[1462]: time="2025-05-17T00:16:52.893768655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:16:52.895084 containerd[1462]: time="2025-05-17T00:16:52.894993216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:52.896262 containerd[1462]: time="2025-05-17T00:16:52.896229257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:16:52.921803 systemd[1]: Started cri-containerd-564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e.scope - libcontainer container 564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e. 
May 17 00:16:52.934026 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:16:52.934781 kernel: bpftool[4280]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:16:52.959966 containerd[1462]: time="2025-05-17T00:16:52.959916116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ff7b45b78-h4g9j,Uid:0222fa69-3882-4194-90c7-5cf5983ab063,Namespace:calico-system,Attempt:1,} returns sandbox id \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\"" May 17 00:16:53.169029 systemd-networkd[1397]: vxlan.calico: Link UP May 17 00:16:53.169039 systemd-networkd[1397]: vxlan.calico: Gained carrier May 17 00:16:53.902853 systemd-networkd[1397]: calid8c27c6d87c: Gained IPv6LL May 17 00:16:54.030800 systemd-networkd[1397]: cali7d4b70ee9f6: Gained IPv6LL May 17 00:16:54.114402 systemd[1]: Started sshd@9-10.0.0.66:22-10.0.0.1:33870.service - OpenSSH per-connection server daemon (10.0.0.1:33870). May 17 00:16:54.154233 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 33870 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:54.155912 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:54.159954 systemd-logind[1446]: New session 10 of user core. May 17 00:16:54.167831 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:16:54.334625 sshd[4365]: pam_unix(sshd:session): session closed for user core May 17 00:16:54.338437 systemd[1]: sshd@9-10.0.0.66:22-10.0.0.1:33870.service: Deactivated successfully. May 17 00:16:54.340463 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:16:54.341375 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. May 17 00:16:54.342363 systemd-logind[1446]: Removed session 10. 
May 17 00:16:54.478848 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL May 17 00:16:56.202141 containerd[1462]: time="2025-05-17T00:16:56.202082811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:56.244790 containerd[1462]: time="2025-05-17T00:16:56.244719469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:16:56.273147 containerd[1462]: time="2025-05-17T00:16:56.273113048Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:56.318179 containerd[1462]: time="2025-05-17T00:16:56.318098457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:56.318693 containerd[1462]: time="2025-05-17T00:16:56.318642087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 3.547101613s" May 17 00:16:56.318747 containerd[1462]: time="2025-05-17T00:16:56.318705586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:16:56.320048 containerd[1462]: time="2025-05-17T00:16:56.320015386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:16:56.321164 containerd[1462]: time="2025-05-17T00:16:56.321123488Z" level=info msg="CreateContainer within sandbox \"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:16:56.522442 containerd[1462]: time="2025-05-17T00:16:56.522327278Z" level=info msg="CreateContainer within sandbox \"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d3c6593a6932ff21aa505dd2b690fa4919bf32acb136cef9a2c237d128336291\"" May 17 00:16:56.522808 containerd[1462]: time="2025-05-17T00:16:56.522779918Z" level=info msg="StartContainer for \"d3c6593a6932ff21aa505dd2b690fa4919bf32acb136cef9a2c237d128336291\"" May 17 00:16:56.554823 systemd[1]: Started cri-containerd-d3c6593a6932ff21aa505dd2b690fa4919bf32acb136cef9a2c237d128336291.scope - libcontainer container d3c6593a6932ff21aa505dd2b690fa4919bf32acb136cef9a2c237d128336291. 
May 17 00:16:56.597937 containerd[1462]: time="2025-05-17T00:16:56.597841787Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:56.641915 containerd[1462]: time="2025-05-17T00:16:56.641869506Z" level=info msg="StartContainer for \"d3c6593a6932ff21aa505dd2b690fa4919bf32acb136cef9a2c237d128336291\" returns successfully" May 17 00:16:56.647625 containerd[1462]: time="2025-05-17T00:16:56.647588166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:56.647746 containerd[1462]: time="2025-05-17T00:16:56.647628381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:16:56.647774 kubelet[2482]: E0517 00:16:56.647748 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:16:56.648187 kubelet[2482]: E0517 00:16:56.647790 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:16:56.648345 containerd[1462]: time="2025-05-17T00:16:56.648268463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:16:56.648963 kubelet[2482]: E0517 00:16:56.648910 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1ce88814e3ca4076bf6dce8934cc9708,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56v7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ff7b45b78-h4g9j_calico-system(0222fa69-3882-4194-90c7-5cf5983ab063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:59.349131 systemd[1]: Started sshd@10-10.0.0.66:22-10.0.0.1:32998.service - OpenSSH per-connection server daemon (10.0.0.1:32998). May 17 00:16:59.388968 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 32998 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:16:59.390618 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:16:59.394898 systemd-logind[1446]: New session 11 of user core. May 17 00:16:59.411819 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:16:59.501734 containerd[1462]: time="2025-05-17T00:16:59.501567799Z" level=info msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\"" May 17 00:16:59.529827 containerd[1462]: time="2025-05-17T00:16:59.529763467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:59.556877 sshd[4435]: pam_unix(sshd:session): session closed for user core May 17 00:16:59.561056 systemd[1]: sshd@10-10.0.0.66:22-10.0.0.1:32998.service: Deactivated successfully. May 17 00:16:59.563146 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:16:59.563842 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. May 17 00:16:59.564787 systemd-logind[1446]: Removed session 11. 
May 17 00:16:59.567106 containerd[1462]: time="2025-05-17T00:16:59.567041908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:16:59.608755 containerd[1462]: time="2025-05-17T00:16:59.608552735Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:59.627591 containerd[1462]: time="2025-05-17T00:16:59.627534845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.593 [INFO][4458] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.594 [INFO][4458] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" iface="eth0" netns="/var/run/netns/cni-8630567a-2087-9ab9-a2ad-b511a06dbe61" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.594 [INFO][4458] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" iface="eth0" netns="/var/run/netns/cni-8630567a-2087-9ab9-a2ad-b511a06dbe61" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.594 [INFO][4458] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" iface="eth0" netns="/var/run/netns/cni-8630567a-2087-9ab9-a2ad-b511a06dbe61" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.594 [INFO][4458] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.594 [INFO][4458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.614 [INFO][4469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.614 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.614 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.619 [WARNING][4469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.619 [INFO][4469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.621 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:16:59.628053 containerd[1462]: 2025-05-17 00:16:59.624 [INFO][4458] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:16:59.628475 containerd[1462]: time="2025-05-17T00:16:59.628362298Z" level=info msg="TearDown network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" successfully" May 17 00:16:59.628475 containerd[1462]: time="2025-05-17T00:16:59.628393727Z" level=info msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" returns successfully" May 17 00:16:59.628983 containerd[1462]: time="2025-05-17T00:16:59.628525775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.980220653s" May 17 00:16:59.628983 containerd[1462]: time="2025-05-17T00:16:59.628566251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:16:59.631084 containerd[1462]: time="2025-05-17T00:16:59.631026029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:16:59.631335 containerd[1462]: time="2025-05-17T00:16:59.631287851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-9ljdj,Uid:54da1e60-c26d-45aa-84ac-d213e8845274,Namespace:calico-apiserver,Attempt:1,}" May 17 00:16:59.631822 systemd[1]: run-netns-cni\x2d8630567a\x2d2087\x2d9ab9\x2da2ad\x2db511a06dbe61.mount: Deactivated successfully. 
May 17 00:16:59.632083 containerd[1462]: time="2025-05-17T00:16:59.632052827Z" level=info msg="CreateContainer within sandbox \"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:16:59.890237 containerd[1462]: time="2025-05-17T00:16:59.890090367Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:00.005934 containerd[1462]: time="2025-05-17T00:17:00.005841962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:00.005934 containerd[1462]: time="2025-05-17T00:17:00.005900912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:17:00.006252 kubelet[2482]: E0517 00:17:00.006136 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:00.006252 kubelet[2482]: E0517 00:17:00.006193 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:00.006741 kubelet[2482]: E0517 00:17:00.006331 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56v7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ff7b45b78-h4g9j_calico-system(0222fa69-3882-4194-90c7-5cf5983ab063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:00.007614 kubelet[2482]: E0517 00:17:00.007527 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5ff7b45b78-h4g9j" podUID="0222fa69-3882-4194-90c7-5cf5983ab063" May 17 00:17:00.158982 containerd[1462]: time="2025-05-17T00:17:00.158844711Z" level=info msg="CreateContainer within sandbox 
\"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"38d3918cef09d576be9de5934d6fe75ea72c9bd1e80bd77ca7a6fa2ece5bdabe\"" May 17 00:17:00.161284 containerd[1462]: time="2025-05-17T00:17:00.159387931Z" level=info msg="StartContainer for \"38d3918cef09d576be9de5934d6fe75ea72c9bd1e80bd77ca7a6fa2ece5bdabe\"" May 17 00:17:00.187830 systemd[1]: Started cri-containerd-38d3918cef09d576be9de5934d6fe75ea72c9bd1e80bd77ca7a6fa2ece5bdabe.scope - libcontainer container 38d3918cef09d576be9de5934d6fe75ea72c9bd1e80bd77ca7a6fa2ece5bdabe. May 17 00:17:00.358409 containerd[1462]: time="2025-05-17T00:17:00.358352508Z" level=info msg="StartContainer for \"38d3918cef09d576be9de5934d6fe75ea72c9bd1e80bd77ca7a6fa2ece5bdabe\" returns successfully" May 17 00:17:00.502605 containerd[1462]: time="2025-05-17T00:17:00.502452040Z" level=info msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\"" May 17 00:17:00.534131 systemd-networkd[1397]: cali8527b3d1f89: Link UP May 17 00:17:00.534992 systemd-networkd[1397]: cali8527b3d1f89: Gained carrier May 17 00:17:00.585531 kubelet[2482]: I0517 00:17:00.585493 2482 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:17:00.585531 kubelet[2482]: I0517 00:17:00.585527 2482 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.205 [INFO][4478] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0 calico-apiserver-67f459565f- calico-apiserver 54da1e60-c26d-45aa-84ac-d213e8845274 1054 0 2025-05-17 00:16:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f459565f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67f459565f-9ljdj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8527b3d1f89 [] [] }} ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.206 [INFO][4478] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.316 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" HandleID="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.317 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" HandleID="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ead0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67f459565f-9ljdj", "timestamp":"2025-05-17 00:17:00.316904542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.317 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.317 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.317 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.323 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.327 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.330 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.331 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.333 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.333 [INFO][4529] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.334 [INFO][4529] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173 May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.376 [INFO][4529] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.529 [INFO][4529] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.529 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" host="localhost" May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.529 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:17:00.742177 containerd[1462]: 2025-05-17 00:17:00.529 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" HandleID="k8s-pod-network.6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:00.743530 containerd[1462]: 2025-05-17 00:17:00.532 [INFO][4478] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"54da1e60-c26d-45aa-84ac-d213e8845274", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67f459565f-9ljdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8527b3d1f89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:00.743530 containerd[1462]: 2025-05-17 00:17:00.532 [INFO][4478] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:00.743530 containerd[1462]: 2025-05-17 00:17:00.532 [INFO][4478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8527b3d1f89 ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:00.743530 containerd[1462]: 2025-05-17 00:17:00.534 [INFO][4478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:00.743530 containerd[1462]: 2025-05-17 00:17:00.535 [INFO][4478] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"54da1e60-c26d-45aa-84ac-d213e8845274", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173", Pod:"calico-apiserver-67f459565f-9ljdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8527b3d1f89", MAC:"b6:2a:8d:69:7f:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:00.743530 containerd[1462]: 2025-05-17 00:17:00.738 [INFO][4478] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-9ljdj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:00.776089 containerd[1462]: time="2025-05-17T00:17:00.775946018Z" level=info msg="StopPodSandbox for \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\"" May 17 00:17:00.779518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e-shm.mount: Deactivated successfully. May 17 00:17:00.785052 systemd[1]: cri-containerd-564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e.scope: Deactivated successfully. May 17 00:17:00.809795 containerd[1462]: time="2025-05-17T00:17:00.809667627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:00.809934 containerd[1462]: time="2025-05-17T00:17:00.809784637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:00.809934 containerd[1462]: time="2025-05-17T00:17:00.809802991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:00.809934 containerd[1462]: time="2025-05-17T00:17:00.809900284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:00.815726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e-rootfs.mount: Deactivated successfully. May 17 00:17:00.821905 containerd[1462]: time="2025-05-17T00:17:00.816828571Z" level=info msg="shim disconnected" id=564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e namespace=k8s.io May 17 00:17:00.821905 containerd[1462]: time="2025-05-17T00:17:00.817058914Z" level=warning msg="cleaning up after shim disconnected" id=564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e namespace=k8s.io May 17 00:17:00.821905 containerd[1462]: time="2025-05-17T00:17:00.817071318Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:17:00.840833 systemd[1]: Started cri-containerd-6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173.scope - libcontainer container 6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173. May 17 00:17:00.855409 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:17:00.884145 containerd[1462]: time="2025-05-17T00:17:00.884095619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-9ljdj,Uid:54da1e60-c26d-45aa-84ac-d213e8845274,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173\"" May 17 00:17:00.885730 containerd[1462]: time="2025-05-17T00:17:00.885669023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.959 [INFO][4548] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.960 [INFO][4548] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" iface="eth0" netns="/var/run/netns/cni-52198cf6-9393-6719-075a-b3eb6bf575c9" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.961 [INFO][4548] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" iface="eth0" netns="/var/run/netns/cni-52198cf6-9393-6719-075a-b3eb6bf575c9" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.961 [INFO][4548] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" iface="eth0" netns="/var/run/netns/cni-52198cf6-9393-6719-075a-b3eb6bf575c9" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.961 [INFO][4548] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.961 [INFO][4548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.980 [INFO][4640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.980 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:00.980 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:01.265 [WARNING][4640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:01.265 [INFO][4640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:01.407 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:01.415555 containerd[1462]: 2025-05-17 00:17:01.411 [INFO][4548] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:01.416446 containerd[1462]: time="2025-05-17T00:17:01.415920425Z" level=info msg="TearDown network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" successfully" May 17 00:17:01.416446 containerd[1462]: time="2025-05-17T00:17:01.415951853Z" level=info msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" returns successfully" May 17 00:17:01.417489 containerd[1462]: time="2025-05-17T00:17:01.417453963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-8b9gs,Uid:2af7b115-8c11-4444-9b1c-fa1f02b3517f,Namespace:calico-system,Attempt:1,}" May 17 00:17:01.502205 containerd[1462]: time="2025-05-17T00:17:01.502134974Z" level=info msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\"" May 17 00:17:01.556297 kubelet[2482]: I0517 00:17:01.555807 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9cgfz" podStartSLOduration=30.69726078 podStartE2EDuration="37.555785953s" podCreationTimestamp="2025-05-17 00:16:24 +0000 UTC" firstStartedPulling="2025-05-17 00:16:52.7709369 +0000 UTC m=+46.341619656" lastFinishedPulling="2025-05-17 00:16:59.629462073 +0000 UTC m=+53.200144829" observedRunningTime="2025-05-17 00:17:01.440442891 +0000 UTC m=+55.011125657" watchObservedRunningTime="2025-05-17 00:17:01.555785953 +0000 UTC m=+55.126468709" May 17 00:17:01.557655 systemd-networkd[1397]: calid8c27c6d87c: Link DOWN May 17 00:17:01.557664 systemd-networkd[1397]: calid8c27c6d87c: Lost carrier May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.568 [INFO][4680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.569 [INFO][4680] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" iface="eth0" netns="/var/run/netns/cni-322286b8-d3ce-757f-5d67-a2f5a8200e42" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.569 [INFO][4680] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" iface="eth0" netns="/var/run/netns/cni-322286b8-d3ce-757f-5d67-a2f5a8200e42" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.569 [INFO][4680] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" iface="eth0" netns="/var/run/netns/cni-322286b8-d3ce-757f-5d67-a2f5a8200e42" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.569 [INFO][4680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.570 [INFO][4680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.592 [INFO][4694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.592 [INFO][4694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.592 [INFO][4694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.598 [WARNING][4694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.598 [INFO][4694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.599 [INFO][4694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:01.606417 containerd[1462]: 2025-05-17 00:17:01.602 [INFO][4680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:01.607376 containerd[1462]: time="2025-05-17T00:17:01.606598383Z" level=info msg="TearDown network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" successfully" May 17 00:17:01.607376 containerd[1462]: time="2025-05-17T00:17:01.606628540Z" level=info msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" returns successfully" May 17 00:17:01.607850 containerd[1462]: time="2025-05-17T00:17:01.607820979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-mjks8,Uid:e170e92d-6fac-4790-9e44-4b5889f835a0,Namespace:calico-apiserver,Attempt:1,}" May 17 00:17:01.637492 systemd[1]: run-netns-cni\x2d322286b8\x2dd3ce\x2d757f\x2d5d67\x2da2f5a8200e42.mount: Deactivated successfully. May 17 00:17:01.637632 systemd[1]: run-netns-cni\x2d52198cf6\x2d9393\x2d6719\x2d075a\x2db3eb6bf575c9.mount: Deactivated successfully. 
May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.556 [INFO][4661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.556 [INFO][4661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" iface="eth0" netns="/var/run/netns/cni-75b833e1-25c4-7853-eaf6-9b6a26a6ef00" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.556 [INFO][4661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" iface="eth0" netns="/var/run/netns/cni-75b833e1-25c4-7853-eaf6-9b6a26a6ef00" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.571 [INFO][4661] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" after=15.12276ms iface="eth0" netns="/var/run/netns/cni-75b833e1-25c4-7853-eaf6-9b6a26a6ef00" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.571 [INFO][4661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.571 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.605 [INFO][4708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.606 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.606 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.644 [INFO][4708] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.644 [INFO][4708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.645 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:01.656033 containerd[1462]: 2025-05-17 00:17:01.651 [INFO][4661] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:01.656592 containerd[1462]: time="2025-05-17T00:17:01.656340174Z" level=info msg="TearDown network for sandbox \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\" successfully" May 17 00:17:01.656592 containerd[1462]: time="2025-05-17T00:17:01.656371282Z" level=info msg="StopPodSandbox for \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\" returns successfully" May 17 00:17:01.657437 containerd[1462]: time="2025-05-17T00:17:01.657067819Z" level=info msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\"" May 17 00:17:01.661399 systemd[1]: run-netns-cni\x2d75b833e1\x2d25c4\x2d7853\x2deaf6\x2d9b6a26a6ef00.mount: Deactivated successfully. May 17 00:17:01.736029 systemd-networkd[1397]: cali2f4e9f3010a: Link UP May 17 00:17:01.737434 systemd-networkd[1397]: cali2f4e9f3010a: Gained carrier May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.607 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0 goldmane-78d55f7ddc- calico-system 2af7b115-8c11-4444-9b1c-fa1f02b3517f 1067 0 2025-05-17 00:16:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-8b9gs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2f4e9f3010a [] [] }} ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.607 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.647 [INFO][4725] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" HandleID="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.647 [INFO][4725] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" HandleID="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eb010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-8b9gs", "timestamp":"2025-05-17 00:17:01.647371005 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.647 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.647 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.647 [INFO][4725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.671 [INFO][4725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.682 [INFO][4725] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.686 [INFO][4725] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.688 [INFO][4725] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.690 [INFO][4725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.690 [INFO][4725] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.693 [INFO][4725] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.716 [INFO][4725] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.727 [INFO][4725] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.727 [INFO][4725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" host="localhost" May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.727 [INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:17:01.772206 containerd[1462]: 2025-05-17 00:17:01.727 [INFO][4725] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" HandleID="k8s-pod-network.522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.773035 containerd[1462]: 2025-05-17 00:17:01.731 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"2af7b115-8c11-4444-9b1c-fa1f02b3517f", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-8b9gs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f4e9f3010a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:01.773035 containerd[1462]: 2025-05-17 00:17:01.731 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.773035 containerd[1462]: 2025-05-17 00:17:01.731 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f4e9f3010a ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.773035 containerd[1462]: 2025-05-17 00:17:01.737 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.773035 containerd[1462]: 2025-05-17 00:17:01.738 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"2af7b115-8c11-4444-9b1c-fa1f02b3517f", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b", Pod:"goldmane-78d55f7ddc-8b9gs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f4e9f3010a", MAC:"3a:dc:9d:25:b9:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:01.773035 containerd[1462]: 2025-05-17 00:17:01.768 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b" Namespace="calico-system" Pod="goldmane-78d55f7ddc-8b9gs" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:01.779860 kubelet[2482]: I0517 00:17:01.779781 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:01.957105 containerd[1462]: time="2025-05-17T00:17:01.956936382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:01.957105 containerd[1462]: time="2025-05-17T00:17:01.957035959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:01.957105 containerd[1462]: time="2025-05-17T00:17:01.957046599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:01.957337 containerd[1462]: time="2025-05-17T00:17:01.957123744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:01.983822 systemd[1]: Started cri-containerd-522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b.scope - libcontainer container 522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b. 
May 17 00:17:01.998232 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:17:02.011779 systemd-networkd[1397]: calia2c254a51b9: Link UP May 17 00:17:02.013585 systemd-networkd[1397]: calia2c254a51b9: Gained carrier May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:01.715 [WARNING][4754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0", GenerateName:"whisker-5ff7b45b78-", Namespace:"calico-system", SelfLink:"", UID:"0222fa69-3882-4194-90c7-5cf5983ab063", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ff7b45b78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e", Pod:"whisker-5ff7b45b78-h4g9j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8c27c6d87c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:01.716 [INFO][4754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:01.716 [INFO][4754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" iface="eth0" netns="" May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:01.716 [INFO][4754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:01.716 [INFO][4754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:01.743 [INFO][4774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:01.743 [INFO][4774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:02.003 [INFO][4774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:02.011 [WARNING][4774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:02.011 [INFO][4774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:02.013 [INFO][4774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:02.021636 containerd[1462]: 2025-05-17 00:17:02.016 [INFO][4754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:02.022075 containerd[1462]: time="2025-05-17T00:17:02.021657260Z" level=info msg="TearDown network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" successfully" May 17 00:17:02.022075 containerd[1462]: time="2025-05-17T00:17:02.021698858Z" level=info msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" returns successfully" May 17 00:17:02.036787 containerd[1462]: time="2025-05-17T00:17:02.036747338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-8b9gs,Uid:2af7b115-8c11-4444-9b1c-fa1f02b3517f,Namespace:calico-system,Attempt:1,} returns sandbox id \"522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b\"" May 17 00:17:02.081225 kubelet[2482]: I0517 00:17:02.081185 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56v7r\" (UniqueName: \"kubernetes.io/projected/0222fa69-3882-4194-90c7-5cf5983ab063-kube-api-access-56v7r\") pod \"0222fa69-3882-4194-90c7-5cf5983ab063\" (UID: \"0222fa69-3882-4194-90c7-5cf5983ab063\") " May 17 00:17:02.081367 kubelet[2482]: I0517 00:17:02.081236 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-backend-key-pair\") pod \"0222fa69-3882-4194-90c7-5cf5983ab063\" (UID: \"0222fa69-3882-4194-90c7-5cf5983ab063\") " May 17 00:17:02.081367 kubelet[2482]: I0517 00:17:02.081256 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-ca-bundle\") pod \"0222fa69-3882-4194-90c7-5cf5983ab063\" (UID: \"0222fa69-3882-4194-90c7-5cf5983ab063\") " May 17 00:17:02.081783 kubelet[2482]: I0517 00:17:02.081764 2482 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0222fa69-3882-4194-90c7-5cf5983ab063" (UID: "0222fa69-3882-4194-90c7-5cf5983ab063"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:17:02.085587 kubelet[2482]: I0517 00:17:02.085536 2482 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0222fa69-3882-4194-90c7-5cf5983ab063" (UID: "0222fa69-3882-4194-90c7-5cf5983ab063"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:17:02.085727 kubelet[2482]: I0517 00:17:02.085658 2482 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0222fa69-3882-4194-90c7-5cf5983ab063-kube-api-access-56v7r" (OuterVolumeSpecName: "kube-api-access-56v7r") pod "0222fa69-3882-4194-90c7-5cf5983ab063" (UID: "0222fa69-3882-4194-90c7-5cf5983ab063"). InnerVolumeSpecName "kube-api-access-56v7r". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:17:02.181838 kubelet[2482]: I0517 00:17:02.181766 2482 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 17 00:17:02.181838 kubelet[2482]: I0517 00:17:02.181797 2482 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-56v7r\" (UniqueName: \"kubernetes.io/projected/0222fa69-3882-4194-90c7-5cf5983ab063-kube-api-access-56v7r\") on node \"localhost\" DevicePath \"\"" May 17 00:17:02.181838 kubelet[2482]: I0517 00:17:02.181806 2482 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0222fa69-3882-4194-90c7-5cf5983ab063-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.680 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0 calico-apiserver-67f459565f- calico-apiserver e170e92d-6fac-4790-9e44-4b5889f835a0 1079 0 2025-05-17 00:16:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f459565f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67f459565f-mjks8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia2c254a51b9 [] [] }} ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.680 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.708 [INFO][4764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" HandleID="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" 
Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.708 [INFO][4764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" HandleID="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67f459565f-mjks8", "timestamp":"2025-05-17 00:17:01.70875407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.708 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.727 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.727 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.755 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.798 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.802 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.804 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.806 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.806 [INFO][4764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.807 [INFO][4764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393 May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:01.834 [INFO][4764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:02.002 [INFO][4764] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:02.003 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" host="localhost" May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:02.003 
[INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:02.209318 containerd[1462]: 2025-05-17 00:17:02.003 [INFO][4764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" HandleID="k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:02.210293 containerd[1462]: 2025-05-17 00:17:02.008 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e170e92d-6fac-4790-9e44-4b5889f835a0", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67f459565f-mjks8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2c254a51b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:02.210293 containerd[1462]: 2025-05-17 00:17:02.008 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:02.210293 containerd[1462]: 2025-05-17 00:17:02.008 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2c254a51b9 ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:02.210293 containerd[1462]: 2025-05-17 00:17:02.014 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:02.210293 containerd[1462]: 2025-05-17 00:17:02.014 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e170e92d-6fac-4790-9e44-4b5889f835a0", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393", Pod:"calico-apiserver-67f459565f-mjks8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2c254a51b9", MAC:"7e:cb:4c:da:39:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:02.210293 containerd[1462]: 2025-05-17 00:17:02.205 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393" Namespace="calico-apiserver" Pod="calico-apiserver-67f459565f-mjks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:02.231656 containerd[1462]: time="2025-05-17T00:17:02.231566773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:02.231656 containerd[1462]: time="2025-05-17T00:17:02.231627657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:02.231656 containerd[1462]: time="2025-05-17T00:17:02.231651181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:02.231869 containerd[1462]: time="2025-05-17T00:17:02.231787667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:02.254820 systemd[1]: Started cri-containerd-7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393.scope - libcontainer container 7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393. 
May 17 00:17:02.267912 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:17:02.292578 containerd[1462]: time="2025-05-17T00:17:02.292524271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f459565f-mjks8,Uid:e170e92d-6fac-4790-9e44-4b5889f835a0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393\"" May 17 00:17:02.350867 systemd-networkd[1397]: cali8527b3d1f89: Gained IPv6LL May 17 00:17:02.505319 containerd[1462]: time="2025-05-17T00:17:02.505080231Z" level=info msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\"" May 17 00:17:02.511449 systemd[1]: Removed slice kubepods-besteffort-pod0222fa69_3882_4194_90c7_5cf5983ab063.slice - libcontainer container kubepods-besteffort-pod0222fa69_3882_4194_90c7_5cf5983ab063.slice. May 17 00:17:02.632904 systemd[1]: var-lib-kubelet-pods-0222fa69\x2d3882\x2d4194\x2d90c7\x2d5cf5983ab063-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d56v7r.mount: Deactivated successfully. May 17 00:17:02.633030 systemd[1]: var-lib-kubelet-pods-0222fa69\x2d3882\x2d4194\x2d90c7\x2d5cf5983ab063-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.607 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.607 [INFO][4897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" iface="eth0" netns="/var/run/netns/cni-9c169ce6-1831-4767-62fe-6e4a7516a608" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.607 [INFO][4897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" iface="eth0" netns="/var/run/netns/cni-9c169ce6-1831-4767-62fe-6e4a7516a608" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.608 [INFO][4897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" iface="eth0" netns="/var/run/netns/cni-9c169ce6-1831-4767-62fe-6e4a7516a608" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.608 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.608 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.631 [INFO][4906] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.632 [INFO][4906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.632 [INFO][4906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
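Note: the [4764] sequence above is Calico's standard per-pod allocation: take the host-wide IPAM lock, confirm this host's affinity to block 192.168.88.128/26, claim one address from it (here 192.168.88.133), and write the block back. A sketch of the equivalent library call, mirroring the AutoAssignArgs dump in the log; the module path and return types vary across Calico releases, so treat this as an approximation rather than the plugin's actual code:

    package main

    import (
        "context"
        "fmt"
        "log"

        client "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        c, err := client.NewFromEnv()
        if err != nil {
            log.Fatal(err)
        }
        handle := "k8s-pod-network.7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393"
        // Field values copied from the assignArgs dump above.
        v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
            Num4:     1,
            HandleID: &handle,
            Attrs: map[string]string{
                "namespace": "calico-apiserver",
                "node":      "localhost",
                "pod":       "calico-apiserver-67f459565f-mjks8",
            },
            Hostname:    "localhost",
            IntendedUse: "Workload",
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(v4) // one address out of the host's block, e.g. 192.168.88.133/26
    }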
May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.637 [WARNING][4906] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.637 [INFO][4906] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.638 [INFO][4906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:02.644716 containerd[1462]: 2025-05-17 00:17:02.641 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:02.645402 containerd[1462]: time="2025-05-17T00:17:02.644973548Z" level=info msg="TearDown network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" successfully" May 17 00:17:02.645402 containerd[1462]: time="2025-05-17T00:17:02.645000759Z" level=info msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" returns successfully" May 17 00:17:02.645452 kubelet[2482]: E0517 00:17:02.645336 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:02.646273 containerd[1462]: time="2025-05-17T00:17:02.646062733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmtqw,Uid:fbfa8f9e-2caa-4166-b768-e488cc5c9d0d,Namespace:kube-system,Attempt:1,}" May 17 00:17:02.647424 systemd[1]: run-netns-cni\x2d9c169ce6\x2d1831\x2d4767\x2d62fe\x2d6e4a7516a608.mount: Deactivated successfully. May 17 00:17:02.798818 systemd-networkd[1397]: cali2f4e9f3010a: Gained IPv6LL May 17 00:17:03.369582 systemd[1]: Created slice kubepods-besteffort-podb5f93af1_14f8_4c4c_9d7c_56660fb8cf64.slice - libcontainer container kubepods-besteffort-podb5f93af1_14f8_4c4c_9d7c_56660fb8cf64.slice. 
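Note: the unit names above (var-lib-kubelet-pods-0222fa69\x2d3882\x2d...mount, run-netns-cni\x2d9c169ce6\x2d...mount) are systemd path escapes: '/' maps to '-', and bytes outside ASCII alphanumerics, ':', '_' and non-leading '.' become C-style \xXX escapes, so '-' prints as \x2d and the '~' in kubernetes.io~projected as \x7e. (Pod slices sidestep this by mapping the UID's dashes to underscores, since '-' nests slices.) A simplified Go rendition of the rule; systemd-escape is the canonical tool:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath applies systemd's path escaping (simplified): trim slashes
    // at the ends, map '/' to '-', and hex-escape anything that is not an
    // ASCII alphanumeric, ':', '_' or a non-leading '.'.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.' && i > 0:
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Reproduces the kube-api-access mount unit name from the log.
        p := "/var/lib/kubelet/pods/0222fa69-3882-4194-90c7-5cf5983ab063/volumes/kubernetes.io~projected/kube-api-access-56v7r"
        fmt.Println(escapePath(p) + ".mount")
    }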
May 17 00:17:03.438811 systemd-networkd[1397]: calia2c254a51b9: Gained IPv6LL May 17 00:17:03.490034 kubelet[2482]: I0517 00:17:03.489990 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxskl\" (UniqueName: \"kubernetes.io/projected/b5f93af1-14f8-4c4c-9d7c-56660fb8cf64-kube-api-access-wxskl\") pod \"whisker-94b97b964-84zct\" (UID: \"b5f93af1-14f8-4c4c-9d7c-56660fb8cf64\") " pod="calico-system/whisker-94b97b964-84zct" May 17 00:17:03.490034 kubelet[2482]: I0517 00:17:03.490038 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f93af1-14f8-4c4c-9d7c-56660fb8cf64-whisker-ca-bundle\") pod \"whisker-94b97b964-84zct\" (UID: \"b5f93af1-14f8-4c4c-9d7c-56660fb8cf64\") " pod="calico-system/whisker-94b97b964-84zct" May 17 00:17:03.490207 kubelet[2482]: I0517 00:17:03.490063 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b5f93af1-14f8-4c4c-9d7c-56660fb8cf64-whisker-backend-key-pair\") pod \"whisker-94b97b964-84zct\" (UID: \"b5f93af1-14f8-4c4c-9d7c-56660fb8cf64\") " pod="calico-system/whisker-94b97b964-84zct" May 17 00:17:03.546946 systemd-networkd[1397]: cali24f1ea739e9: Link UP May 17 00:17:03.547722 systemd-networkd[1397]: cali24f1ea739e9: Gained carrier May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.347 [INFO][4917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0 coredns-668d6bf9bc- kube-system fbfa8f9e-2caa-4166-b768-e488cc5c9d0d 1099 0 2025-05-17 00:16:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-vmtqw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24f1ea739e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.347 [INFO][4917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.497 [INFO][4934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" HandleID="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.497 [INFO][4934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" HandleID="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e970), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-668d6bf9bc-vmtqw", "timestamp":"2025-05-17 00:17:03.49714652 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.497 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.497 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.497 [INFO][4934] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.503 [INFO][4934] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.506 [INFO][4934] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.509 [INFO][4934] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.511 [INFO][4934] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.513 [INFO][4934] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.513 [INFO][4934] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.514 [INFO][4934] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80 May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.525 [INFO][4934] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.542 [INFO][4934] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.542 [INFO][4934] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" host="localhost" May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.542 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:17:03.575281 containerd[1462]: 2025-05-17 00:17:03.542 [INFO][4934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" HandleID="k8s-pod-network.fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:03.575794 containerd[1462]: 2025-05-17 00:17:03.545 [INFO][4917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vmtqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24f1ea739e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:03.575794 containerd[1462]: 2025-05-17 00:17:03.545 [INFO][4917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:03.575794 containerd[1462]: 2025-05-17 00:17:03.545 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24f1ea739e9 ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:03.575794 containerd[1462]: 2025-05-17 00:17:03.547 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:03.575794 
containerd[1462]: 2025-05-17 00:17:03.549 [INFO][4917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80", Pod:"coredns-668d6bf9bc-vmtqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24f1ea739e9", MAC:"e2:26:a0:5c:01:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:03.575794 containerd[1462]: 2025-05-17 00:17:03.570 [INFO][4917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmtqw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:03.639443 containerd[1462]: time="2025-05-17T00:17:03.638719417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:17:03.640417 containerd[1462]: time="2025-05-17T00:17:03.638816900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:17:03.640417 containerd[1462]: time="2025-05-17T00:17:03.638836607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:03.640417 containerd[1462]: time="2025-05-17T00:17:03.638958466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:17:03.673466 containerd[1462]: time="2025-05-17T00:17:03.673360803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94b97b964-84zct,Uid:b5f93af1-14f8-4c4c-9d7c-56660fb8cf64,Namespace:calico-system,Attempt:0,}" May 17 00:17:03.673874 systemd[1]: Started cri-containerd-fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80.scope - libcontainer container fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80. May 17 00:17:03.689322 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:17:03.720402 containerd[1462]: time="2025-05-17T00:17:03.720348804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmtqw,Uid:fbfa8f9e-2caa-4166-b768-e488cc5c9d0d,Namespace:kube-system,Attempt:1,} returns sandbox id \"fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80\"" May 17 00:17:03.721047 kubelet[2482]: E0517 00:17:03.721001 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:03.732377 containerd[1462]: time="2025-05-17T00:17:03.732337208Z" level=info msg="CreateContainer within sandbox \"fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:17:04.502489 containerd[1462]: time="2025-05-17T00:17:04.502400253Z" level=info msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\"" May 17 00:17:04.503132 containerd[1462]: time="2025-05-17T00:17:04.502758927Z" level=info msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\"" May 17 00:17:04.504344 kubelet[2482]: I0517 00:17:04.504306 2482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0222fa69-3882-4194-90c7-5cf5983ab063" path="/var/lib/kubelet/pods/0222fa69-3882-4194-90c7-5cf5983ab063/volumes" May 17 00:17:04.574462 systemd[1]: Started sshd@11-10.0.0.66:22-10.0.0.1:33014.service - OpenSSH per-connection server daemon (10.0.0.1:33014). May 17 00:17:04.614300 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 33014 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:04.616251 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:04.620282 systemd-logind[1446]: New session 12 of user core. May 17 00:17:04.627808 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:17:04.830797 sshd[5038]: pam_unix(sshd:session): session closed for user core May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.792 [INFO][5019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.792 [INFO][5019] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" iface="eth0" netns="/var/run/netns/cni-9df81234-4f0e-88dd-fb8e-7314185fd3c6" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.793 [INFO][5019] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" iface="eth0" netns="/var/run/netns/cni-9df81234-4f0e-88dd-fb8e-7314185fd3c6" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.793 [INFO][5019] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" iface="eth0" netns="/var/run/netns/cni-9df81234-4f0e-88dd-fb8e-7314185fd3c6" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.793 [INFO][5019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.793 [INFO][5019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.816 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.816 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.816 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.822 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.822 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0" May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.825 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:04.831262 containerd[1462]: 2025-05-17 00:17:04.828 [INFO][5019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" May 17 00:17:04.832995 containerd[1462]: time="2025-05-17T00:17:04.832428732Z" level=info msg="TearDown network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" successfully" May 17 00:17:04.832995 containerd[1462]: time="2025-05-17T00:17:04.832474919Z" level=info msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" returns successfully" May 17 00:17:04.833606 containerd[1462]: time="2025-05-17T00:17:04.833586536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd784c9b6-wxxjz,Uid:a14251ec-79d4-49b3-94ab-87e70e4faa0d,Namespace:calico-system,Attempt:1,}" May 17 00:17:04.844457 systemd[1]: run-netns-cni\x2d9df81234\x2d4f0e\x2d88dd\x2dfb8e\x2d7314185fd3c6.mount: Deactivated successfully. 
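Note: the teardown path is deliberately idempotent: the plugin enters the sandbox's network namespace (kept alive via the bind mount under /var/run/netns/), finds the veth already gone ("Nothing to do"), and treats both that and the missing IPAM allocation as success, so kubelet can retry StopPodSandbox safely. A sketch of the "already gone is fine" pattern using the netlink library Calico's Linux dataplane builds on (github.com/vishvananda/netlink); deleteIfPresent is a hypothetical helper, not the plugin's actual function:

    package main

    import (
        "errors"
        "fmt"

        "github.com/vishvananda/netlink"
    )

    // deleteIfPresent removes a link, treating "not found" as success so the
    // operation can be retried without error (the CNI DEL contract).
    func deleteIfPresent(name string) error {
        link, err := netlink.LinkByName(name)
        if err != nil {
            var notFound netlink.LinkNotFoundError
            if errors.As(err, &notFound) {
                return nil // already gone: nothing to do
            }
            return err
        }
        return netlink.LinkDel(link)
    }

    func main() {
        // Interface name taken from an earlier WorkloadEndpoint dump.
        fmt.Println(deleteIfPresent("calid8c27c6d87c"))
    }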
May 17 00:17:04.846012 systemd[1]: sshd@11-10.0.0.66:22-10.0.0.1:33014.service: Deactivated successfully. May 17 00:17:04.847610 systemd-networkd[1397]: cali24f1ea739e9: Gained IPv6LL May 17 00:17:04.849605 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:17:04.851539 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. May 17 00:17:04.858593 systemd[1]: Started sshd@12-10.0.0.66:22-10.0.0.1:33018.service - OpenSSH per-connection server daemon (10.0.0.1:33018). May 17 00:17:04.860597 systemd-logind[1446]: Removed session 12. May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.822 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.822 [INFO][5020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" iface="eth0" netns="/var/run/netns/cni-6e27eef6-103b-f025-0905-a10b6684e175" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.823 [INFO][5020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" iface="eth0" netns="/var/run/netns/cni-6e27eef6-103b-f025-0905-a10b6684e175" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.824 [INFO][5020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" iface="eth0" netns="/var/run/netns/cni-6e27eef6-103b-f025-0905-a10b6684e175" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.824 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.824 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.850 [INFO][5062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.850 [INFO][5062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.851 [INFO][5062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.856 [WARNING][5062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.857 [INFO][5062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.860 [INFO][5062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:04.866567 containerd[1462]: 2025-05-17 00:17:04.863 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:04.870081 containerd[1462]: time="2025-05-17T00:17:04.866888494Z" level=info msg="TearDown network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" successfully" May 17 00:17:04.870081 containerd[1462]: time="2025-05-17T00:17:04.866924802Z" level=info msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" returns successfully" May 17 00:17:04.870081 containerd[1462]: time="2025-05-17T00:17:04.867660463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xdkt4,Uid:2864e22f-90c3-40b7-81ed-054edc334c43,Namespace:kube-system,Attempt:1,}" May 17 00:17:04.869499 systemd[1]: run-netns-cni\x2d6e27eef6\x2d103b\x2df025\x2d0905\x2da10b6684e175.mount: Deactivated successfully. May 17 00:17:04.870212 kubelet[2482]: E0517 00:17:04.867234 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:04.892259 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 33018 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:04.893961 sshd[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:04.899143 systemd-logind[1446]: New session 13 of user core. May 17 00:17:04.903822 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:17:05.188490 systemd[1]: Started sshd@13-10.0.0.66:22-10.0.0.1:33022.service - OpenSSH per-connection server daemon (10.0.0.1:33022). May 17 00:17:05.190895 sshd[5078]: pam_unix(sshd:session): session closed for user core May 17 00:17:05.198762 systemd[1]: sshd@12-10.0.0.66:22-10.0.0.1:33018.service: Deactivated successfully. May 17 00:17:05.201326 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:17:05.211533 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. May 17 00:17:05.213534 systemd-logind[1446]: Removed session 13. 
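Note: the recurring kubelet warning "Nameserver limits exceeded" fires because glibc resolvers only honor the first three nameserver lines (MAXNS=3), so kubelet trims a pod's resolv.conf to at most three entries; here it kept 1.1.1.1, 1.0.0.1 and 8.8.8.8. A small Go check in the same spirit (hypothetical; kubelet's actual logic lives in its dns package):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc's MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("limit exceeded: keeping %v, dropping %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }

Relatedly, the coredns WorkloadEndpoint dumps above print ports in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the CoreDNS metrics port).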
May 17 00:17:05.216796 containerd[1462]: time="2025-05-17T00:17:05.216641339Z" level=info msg="CreateContainer within sandbox \"fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2aa3ea595da7c19a1cc6c6ef5c49cee39280a3d8bb3e2866271813720125b877\"" May 17 00:17:05.220109 containerd[1462]: time="2025-05-17T00:17:05.219946884Z" level=info msg="StartContainer for \"2aa3ea595da7c19a1cc6c6ef5c49cee39280a3d8bb3e2866271813720125b877\"" May 17 00:17:05.258083 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 33022 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:05.259543 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:05.268531 systemd[1]: Started cri-containerd-2aa3ea595da7c19a1cc6c6ef5c49cee39280a3d8bb3e2866271813720125b877.scope - libcontainer container 2aa3ea595da7c19a1cc6c6ef5c49cee39280a3d8bb3e2866271813720125b877. May 17 00:17:05.276830 systemd-logind[1446]: New session 14 of user core. May 17 00:17:05.280180 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:17:05.414854 containerd[1462]: time="2025-05-17T00:17:05.414717946Z" level=info msg="StartContainer for \"2aa3ea595da7c19a1cc6c6ef5c49cee39280a3d8bb3e2866271813720125b877\" returns successfully" May 17 00:17:05.436344 systemd-networkd[1397]: calia0c3bd7a2a6: Link UP May 17 00:17:05.437584 systemd-networkd[1397]: calia0c3bd7a2a6: Gained carrier May 17 00:17:05.455648 sshd[5089]: pam_unix(sshd:session): session closed for user core May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.253 [INFO][5091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--94b97b964--84zct-eth0 whisker-94b97b964- calico-system b5f93af1-14f8-4c4c-9d7c-56660fb8cf64 1117 0 2025-05-17 00:17:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:94b97b964 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-94b97b964-84zct eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia0c3bd7a2a6 [] [] }} ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.253 [INFO][5091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-eth0" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.292 [INFO][5149] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" HandleID="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Workload="localhost-k8s-whisker--94b97b964--84zct-eth0" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.292 [INFO][5149] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" HandleID="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Workload="localhost-k8s-whisker--94b97b964--84zct-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-94b97b964-84zct", "timestamp":"2025-05-17 00:17:05.292480154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.292 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.292 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.293 [INFO][5149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.308 [INFO][5149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.315 [INFO][5149] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.319 [INFO][5149] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.321 [INFO][5149] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.323 [INFO][5149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.323 [INFO][5149] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.324 [INFO][5149] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.379 [INFO][5149] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.416 [INFO][5149] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.416 [INFO][5149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" host="localhost" May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.416 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
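Note: taken together, these entries show a redeploy of whisker: the old whisker-5ff7b45b78-h4g9j pod (UID 0222fa69-...) had its endpoint torn down and its volumes unmounted earlier, and the new whisker-94b97b964-84zct pod is now being wired up with 192.168.88.135/26 from the same block. Watching that handover from the API side might look like this client-go sketch, assuming standard kubeconfig loading (the "k8s-app=whisker" label comes from the WorkloadEndpoint dumps):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("calico-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=whisker"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s\t%s\t%s\n", p.Name, p.Status.Phase, p.Status.PodIP)
        }
    }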
May 17 00:17:05.463479 containerd[1462]: 2025-05-17 00:17:05.416 [INFO][5149] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" HandleID="k8s-pod-network.ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Workload="localhost-k8s-whisker--94b97b964--84zct-eth0"
May 17 00:17:05.464146 containerd[1462]: 2025-05-17 00:17:05.431 [INFO][5091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--94b97b964--84zct-eth0", GenerateName:"whisker-94b97b964-", Namespace:"calico-system", SelfLink:"", UID:"b5f93af1-14f8-4c4c-9d7c-56660fb8cf64", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"94b97b964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-94b97b964-84zct", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia0c3bd7a2a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:05.464146 containerd[1462]: 2025-05-17 00:17:05.431 [INFO][5091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-eth0"
May 17 00:17:05.464146 containerd[1462]: 2025-05-17 00:17:05.431 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0c3bd7a2a6 ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-eth0"
May 17 00:17:05.464146 containerd[1462]: 2025-05-17 00:17:05.438 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-eth0"
May 17 00:17:05.464146 containerd[1462]: 2025-05-17 00:17:05.439 [INFO][5091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--94b97b964--84zct-eth0", GenerateName:"whisker-94b97b964-", Namespace:"calico-system", SelfLink:"", UID:"b5f93af1-14f8-4c4c-9d7c-56660fb8cf64", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"94b97b964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a", Pod:"whisker-94b97b964-84zct", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia0c3bd7a2a6", MAC:"7e:ad:52:1c:6e:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:05.464146 containerd[1462]: 2025-05-17 00:17:05.457 [INFO][5091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a" Namespace="calico-system" Pod="whisker-94b97b964-84zct" WorkloadEndpoint="localhost-k8s-whisker--94b97b964--84zct-eth0"
May 17 00:17:05.464986 systemd[1]: sshd@13-10.0.0.66:22-10.0.0.1:33022.service: Deactivated successfully.
May 17 00:17:05.468247 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:17:05.470804 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit.
May 17 00:17:05.472365 systemd-logind[1446]: Removed session 14.
May 17 00:17:05.493578 containerd[1462]: time="2025-05-17T00:17:05.493470450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:17:05.494865 containerd[1462]: time="2025-05-17T00:17:05.493587991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:17:05.494865 containerd[1462]: time="2025-05-17T00:17:05.493616514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:17:05.494865 containerd[1462]: time="2025-05-17T00:17:05.493778989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:17:05.501519 systemd-networkd[1397]: cali6e364d3b92e: Link UP
May 17 00:17:05.502684 systemd-networkd[1397]: cali6e364d3b92e: Gained carrier
May 17 00:17:05.513169 systemd[1]: Started cri-containerd-ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a.scope - libcontainer container ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a.
May 17 00:17:05.529020 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 17 00:17:05.557129 containerd[1462]: time="2025-05-17T00:17:05.557085421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94b97b964-84zct,Uid:b5f93af1-14f8-4c4c-9d7c-56660fb8cf64,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef9c9e93f679c8a27c3334b379ed2db37893122a97ca255b1982797eed80cc5a\""
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.295 [INFO][5117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0 calico-kube-controllers-7cd784c9b6- calico-system a14251ec-79d4-49b3-94ab-87e70e4faa0d 1132 0 2025-05-17 00:16:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cd784c9b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cd784c9b6-wxxjz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6e364d3b92e [] [] }} ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.296 [INFO][5117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.349 [INFO][5169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" HandleID="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.349 [INFO][5169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" HandleID="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cd784c9b6-wxxjz", "timestamp":"2025-05-17 00:17:05.349216705 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.349 [INFO][5169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.418 [INFO][5169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.419 [INFO][5169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.434 [INFO][5169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.443 [INFO][5169] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.456 [INFO][5169] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.459 [INFO][5169] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.463 [INFO][5169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.463 [INFO][5169] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.465 [INFO][5169] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.471 [INFO][5169] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.486 [INFO][5169] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.486 [INFO][5169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" host="localhost"
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.486 [INFO][5169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:17:05.640410 containerd[1462]: 2025-05-17 00:17:05.486 [INFO][5169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" HandleID="k8s-pod-network.7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:05.641844 containerd[1462]: 2025-05-17 00:17:05.492 [INFO][5117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0", GenerateName:"calico-kube-controllers-7cd784c9b6-", Namespace:"calico-system", SelfLink:"", UID:"a14251ec-79d4-49b3-94ab-87e70e4faa0d", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd784c9b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cd784c9b6-wxxjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e364d3b92e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:05.641844 containerd[1462]: 2025-05-17 00:17:05.493 [INFO][5117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:05.641844 containerd[1462]: 2025-05-17 00:17:05.493 [INFO][5117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e364d3b92e ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:05.641844 containerd[1462]: 2025-05-17 00:17:05.503 [INFO][5117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:05.641844 containerd[1462]: 2025-05-17 00:17:05.503 [INFO][5117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0", GenerateName:"calico-kube-controllers-7cd784c9b6-", Namespace:"calico-system", SelfLink:"", UID:"a14251ec-79d4-49b3-94ab-87e70e4faa0d", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd784c9b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a", Pod:"calico-kube-controllers-7cd784c9b6-wxxjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e364d3b92e", MAC:"8a:97:d5:77:13:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:05.641844 containerd[1462]: 2025-05-17 00:17:05.637 [INFO][5117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a" Namespace="calico-system" Pod="calico-kube-controllers-7cd784c9b6-wxxjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:05.668667 containerd[1462]: time="2025-05-17T00:17:05.668374564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:17:05.668667 containerd[1462]: time="2025-05-17T00:17:05.668630836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:17:05.668891 containerd[1462]: time="2025-05-17T00:17:05.668692411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:17:05.668922 containerd[1462]: time="2025-05-17T00:17:05.668884531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:17:05.687266 containerd[1462]: time="2025-05-17T00:17:05.686464829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:17:05.686603 systemd-networkd[1397]: calid68fc353d5b: Link UP
May 17 00:17:05.688160 systemd-networkd[1397]: calid68fc353d5b: Gained carrier
May 17 00:17:05.689835 containerd[1462]: time="2025-05-17T00:17:05.689768739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431"
May 17 00:17:05.691338 containerd[1462]: time="2025-05-17T00:17:05.691318298Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:17:05.695015 systemd[1]: Started cri-containerd-7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a.scope - libcontainer container 7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a.
May 17 00:17:05.697394 containerd[1462]: time="2025-05-17T00:17:05.696815486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:17:05.701604 containerd[1462]: time="2025-05-17T00:17:05.701552808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 4.815821518s"
May 17 00:17:05.701747 containerd[1462]: time="2025-05-17T00:17:05.701722957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 17 00:17:05.703628 containerd[1462]: time="2025-05-17T00:17:05.703404303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.309 [INFO][5126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0 coredns-668d6bf9bc- kube-system 2864e22f-90c3-40b7-81ed-054edc334c43 1133 0 2025-05-17 00:16:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xdkt4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid68fc353d5b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.309 [INFO][5126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.357 [INFO][5183] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" HandleID="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.357 [INFO][5183] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" HandleID="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e2fb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xdkt4", "timestamp":"2025-05-17 00:17:05.357164492 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.357 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.486 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.486 [INFO][5183] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.533 [INFO][5183] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.640 [INFO][5183] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.645 [INFO][5183] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.647 [INFO][5183] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.649 [INFO][5183] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.649 [INFO][5183] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.651 [INFO][5183] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.664 [INFO][5183] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.678 [INFO][5183] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.678 [INFO][5183] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" host="localhost"
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.678 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:17:05.706215 containerd[1462]: 2025-05-17 00:17:05.678 [INFO][5183] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" HandleID="k8s-pod-network.3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0"
May 17 00:17:05.706867 containerd[1462]: 2025-05-17 00:17:05.683 [INFO][5126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2864e22f-90c3-40b7-81ed-054edc334c43", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xdkt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid68fc353d5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:05.706867 containerd[1462]: 2025-05-17 00:17:05.683 [INFO][5126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0"
May 17 00:17:05.706867 containerd[1462]: 2025-05-17 00:17:05.683 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid68fc353d5b ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0"
May 17 00:17:05.706867 containerd[1462]: 2025-05-17 00:17:05.688 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0"
May 17 00:17:05.706867 containerd[1462]: 2025-05-17 00:17:05.689 [INFO][5126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2864e22f-90c3-40b7-81ed-054edc334c43", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9", Pod:"coredns-668d6bf9bc-xdkt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid68fc353d5b", MAC:"ea:97:4c:ad:5d:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:05.706867 containerd[1462]: 2025-05-17 00:17:05.700 [INFO][5126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-xdkt4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0"
May 17 00:17:05.707362 containerd[1462]: time="2025-05-17T00:17:05.707327887Z" level=info msg="CreateContainer within sandbox \"6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 17 00:17:05.715515 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 17 00:17:05.728979 containerd[1462]: time="2025-05-17T00:17:05.728929162Z" level=info msg="CreateContainer within sandbox \"6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8c263c050b5cfe0a8a68f75ee1bb4a55ef5b24875d0be869f70fe84b30a6d575\""
May 17 00:17:05.730029 containerd[1462]: time="2025-05-17T00:17:05.729870779Z" level=info msg="StartContainer for \"8c263c050b5cfe0a8a68f75ee1bb4a55ef5b24875d0be869f70fe84b30a6d575\""
May 17 00:17:05.734822 containerd[1462]: time="2025-05-17T00:17:05.733904931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:17:05.735187 containerd[1462]: time="2025-05-17T00:17:05.735099753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:17:05.735402 containerd[1462]: time="2025-05-17T00:17:05.735231099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:17:05.735698 containerd[1462]: time="2025-05-17T00:17:05.735528418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:17:05.758829 systemd[1]: Started cri-containerd-3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9.scope - libcontainer container 3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9.
May 17 00:17:05.760400 containerd[1462]: time="2025-05-17T00:17:05.760364994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd784c9b6-wxxjz,Uid:a14251ec-79d4-49b3-94ab-87e70e4faa0d,Namespace:calico-system,Attempt:1,} returns sandbox id \"7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a\""
May 17 00:17:05.764129 systemd[1]: Started cri-containerd-8c263c050b5cfe0a8a68f75ee1bb4a55ef5b24875d0be869f70fe84b30a6d575.scope - libcontainer container 8c263c050b5cfe0a8a68f75ee1bb4a55ef5b24875d0be869f70fe84b30a6d575.
May 17 00:17:05.780654 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 17 00:17:05.804653 kubelet[2482]: E0517 00:17:05.804624 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:17:05.808164 containerd[1462]: time="2025-05-17T00:17:05.808114216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xdkt4,Uid:2864e22f-90c3-40b7-81ed-054edc334c43,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9\""
May 17 00:17:05.808615 kubelet[2482]: E0517 00:17:05.808582 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:17:05.813079 containerd[1462]: time="2025-05-17T00:17:05.813046114Z" level=info msg="CreateContainer within sandbox \"3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:17:05.823114 kubelet[2482]: I0517 00:17:05.822264 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vmtqw" podStartSLOduration=52.822241593 podStartE2EDuration="52.822241593s" podCreationTimestamp="2025-05-17 00:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:17:05.821219114 +0000 UTC m=+59.391901870" watchObservedRunningTime="2025-05-17 00:17:05.822241593 +0000 UTC m=+59.392924339"
May 17 00:17:05.830149 containerd[1462]: time="2025-05-17T00:17:05.829986289Z" level=info msg="StartContainer for \"8c263c050b5cfe0a8a68f75ee1bb4a55ef5b24875d0be869f70fe84b30a6d575\" returns successfully"
May 17 00:17:05.844574 containerd[1462]: time="2025-05-17T00:17:05.843901457Z" level=info msg="CreateContainer within sandbox \"3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1166225ac5a37e27b665cf7f85d5853516000c8fea0e43701c8ec8ea48b65475\""
May 17 00:17:05.844935 containerd[1462]: time="2025-05-17T00:17:05.844719833Z" level=info msg="StartContainer for \"1166225ac5a37e27b665cf7f85d5853516000c8fea0e43701c8ec8ea48b65475\""
May 17 00:17:05.855237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637539886.mount: Deactivated successfully.
May 17 00:17:05.879442 systemd[1]: run-containerd-runc-k8s.io-1166225ac5a37e27b665cf7f85d5853516000c8fea0e43701c8ec8ea48b65475-runc.QkcOkb.mount: Deactivated successfully.
May 17 00:17:05.885820 systemd[1]: Started cri-containerd-1166225ac5a37e27b665cf7f85d5853516000c8fea0e43701c8ec8ea48b65475.scope - libcontainer container 1166225ac5a37e27b665cf7f85d5853516000c8fea0e43701c8ec8ea48b65475.
May 17 00:17:05.915610 containerd[1462]: time="2025-05-17T00:17:05.915522104Z" level=info msg="StartContainer for \"1166225ac5a37e27b665cf7f85d5853516000c8fea0e43701c8ec8ea48b65475\" returns successfully"
May 17 00:17:05.960056 containerd[1462]: time="2025-05-17T00:17:05.959752742Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:17:05.960884 containerd[1462]: time="2025-05-17T00:17:05.960840645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:17:05.961111 containerd[1462]: time="2025-05-17T00:17:05.960961592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:17:05.961281 kubelet[2482]: E0517 00:17:05.961229 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:17:05.961695 kubelet[2482]: E0517 00:17:05.961286 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:17:05.961695 kubelet[2482]: E0517 00:17:05.961507 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7fpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-8b9gs_calico-system(2af7b115-8c11-4444-9b1c-fa1f02b3517f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:17:05.962591 containerd[1462]: time="2025-05-17T00:17:05.962137729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 17 00:17:05.963861 kubelet[2482]: E0517 00:17:05.963563 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f"
May 17 00:17:06.340855 containerd[1462]: time="2025-05-17T00:17:06.340774551Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:17:06.341783 containerd[1462]: time="2025-05-17T00:17:06.341709766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 17 00:17:06.344133 containerd[1462]: time="2025-05-17T00:17:06.344099982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 381.916006ms"
May 17 00:17:06.344220 containerd[1462]: time="2025-05-17T00:17:06.344138985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 17 00:17:06.348092 containerd[1462]: time="2025-05-17T00:17:06.348045618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:17:06.348884 containerd[1462]: time="2025-05-17T00:17:06.348845017Z" level=info msg="CreateContainer within sandbox \"7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 17 00:17:06.371267 containerd[1462]: time="2025-05-17T00:17:06.371215614Z" level=info msg="CreateContainer within sandbox \"7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"743dfea7f73bbc578b55818b8c7bec5a86f78b0f4bd80ff1a6f8374472fc2bfb\""
May 17 00:17:06.372889 containerd[1462]: time="2025-05-17T00:17:06.371806313Z" level=info msg="StartContainer for \"743dfea7f73bbc578b55818b8c7bec5a86f78b0f4bd80ff1a6f8374472fc2bfb\""
May 17 00:17:06.402983 systemd[1]: Started cri-containerd-743dfea7f73bbc578b55818b8c7bec5a86f78b0f4bd80ff1a6f8374472fc2bfb.scope - libcontainer container 743dfea7f73bbc578b55818b8c7bec5a86f78b0f4bd80ff1a6f8374472fc2bfb.
May 17 00:17:06.449911 containerd[1462]: time="2025-05-17T00:17:06.449830264Z" level=info msg="StartContainer for \"743dfea7f73bbc578b55818b8c7bec5a86f78b0f4bd80ff1a6f8374472fc2bfb\" returns successfully"
May 17 00:17:06.494439 containerd[1462]: time="2025-05-17T00:17:06.494143482Z" level=info msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\""
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.535 [WARNING][5510] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0", GenerateName:"calico-kube-controllers-7cd784c9b6-", Namespace:"calico-system", SelfLink:"", UID:"a14251ec-79d4-49b3-94ab-87e70e4faa0d", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd784c9b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a", Pod:"calico-kube-controllers-7cd784c9b6-wxxjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e364d3b92e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.536 [INFO][5510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.536 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" iface="eth0" netns=""
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.536 [INFO][5510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.536 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.566 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.566 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.566 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.570 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.570 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.572 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:17:06.578060 containerd[1462]: 2025-05-17 00:17:06.575 [INFO][5510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.578754 containerd[1462]: time="2025-05-17T00:17:06.578728147Z" level=info msg="TearDown network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" successfully"
May 17 00:17:06.578811 containerd[1462]: time="2025-05-17T00:17:06.578798609Z" level=info msg="StopPodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" returns successfully"
May 17 00:17:06.586876 containerd[1462]: time="2025-05-17T00:17:06.586850862Z" level=info msg="RemovePodSandbox for \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\""
May 17 00:17:06.589321 containerd[1462]: time="2025-05-17T00:17:06.589080706Z" level=info msg="Forcibly stopping sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\""
May 17 00:17:06.591251 containerd[1462]: time="2025-05-17T00:17:06.591171832Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:17:06.595145 containerd[1462]: time="2025-05-17T00:17:06.595120312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:17:06.595899 containerd[1462]: time="2025-05-17T00:17:06.595203569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 00:17:06.595949 kubelet[2482]: E0517 00:17:06.595369 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:17:06.595949 kubelet[2482]: E0517 00:17:06.595409 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:17:06.595949 kubelet[2482]: E0517 00:17:06.595588 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1ce88814e3ca4076bf6dce8934cc9708,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxskl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94b97b964-84zct_calico-system(b5f93af1-14f8-4c4c-9d7c-56660fb8cf64): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:17:06.596842 containerd[1462]: time="2025-05-17T00:17:06.596824360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\""
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.624 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0", GenerateName:"calico-kube-controllers-7cd784c9b6-", Namespace:"calico-system", SelfLink:"", UID:"a14251ec-79d4-49b3-94ab-87e70e4faa0d", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd784c9b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a", Pod:"calico-kube-controllers-7cd784c9b6-wxxjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e364d3b92e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.624 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.624 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" iface="eth0" netns=""
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.624 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.624 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.645 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.645 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.645 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.652 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.652 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" HandleID="k8s-pod-network.cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a" Workload="localhost-k8s-calico--kube--controllers--7cd784c9b6--wxxjz-eth0"
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.653 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:17:06.659168 containerd[1462]: 2025-05-17 00:17:06.656 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a"
May 17 00:17:06.659579 containerd[1462]: time="2025-05-17T00:17:06.659217844Z" level=info msg="TearDown network for sandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" successfully"
May 17 00:17:06.667561 containerd[1462]: time="2025-05-17T00:17:06.667509076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:17:06.667730 containerd[1462]: time="2025-05-17T00:17:06.667584618Z" level=info msg="RemovePodSandbox \"cd865d742e4c5c548b10aa6732a1cfbc2251e4ba02f53bd89d73a8f63418436a\" returns successfully"
May 17 00:17:06.668462 containerd[1462]: time="2025-05-17T00:17:06.668199241Z" level=info msg="StopPodSandbox for \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\""
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.702 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9cgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be1c8442-ea83-4a8e-9428-f2f62d4e4acf", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f", Pod:"csi-node-driver-9cgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b70ee9f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.702 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374"
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.702 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" iface="eth0" netns=""
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.702 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374"
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.702 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374"
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.723 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0"
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.723 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.723 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.729 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.729 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.731 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:06.737184 containerd[1462]: 2025-05-17 00:17:06.734 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:17:06.737619 containerd[1462]: time="2025-05-17T00:17:06.737228931Z" level=info msg="TearDown network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\" successfully" May 17 00:17:06.737619 containerd[1462]: time="2025-05-17T00:17:06.737261442Z" level=info msg="StopPodSandbox for \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\" returns successfully" May 17 00:17:06.737887 containerd[1462]: time="2025-05-17T00:17:06.737814209Z" level=info msg="RemovePodSandbox for \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\"" May 17 00:17:06.737887 containerd[1462]: time="2025-05-17T00:17:06.737864092Z" level=info msg="Forcibly stopping sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\"" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.773 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9cgfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be1c8442-ea83-4a8e-9428-f2f62d4e4acf", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4445feca67b2b1d219496262814bc39f47e592329bd9ce3d804ccaf3540a167f", Pod:"csi-node-driver-9cgfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b70ee9f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.773 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.773 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" iface="eth0" netns="" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.773 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.773 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.794 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.794 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.794 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.800 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.800 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" HandleID="k8s-pod-network.0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" Workload="localhost-k8s-csi--node--driver--9cgfz-eth0" May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.802 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:06.809834 containerd[1462]: 2025-05-17 00:17:06.805 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374" May 17 00:17:06.810516 containerd[1462]: time="2025-05-17T00:17:06.809900815Z" level=info msg="TearDown network for sandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\" successfully" May 17 00:17:06.815493 containerd[1462]: time="2025-05-17T00:17:06.815290882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:06.815493 containerd[1462]: time="2025-05-17T00:17:06.815360111Z" level=info msg="RemovePodSandbox \"0dfcc3297e5b5c82ef3caa4bd2990dfd38e8b8fc3e3b8e58870cb0946dce6374\" returns successfully" May 17 00:17:06.817509 containerd[1462]: time="2025-05-17T00:17:06.817461336Z" level=info msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\"" May 17 00:17:06.824057 kubelet[2482]: E0517 00:17:06.824011 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:06.833873 kubelet[2482]: E0517 00:17:06.833823 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:06.836251 kubelet[2482]: E0517 00:17:06.836224 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f" May 17 00:17:06.850967 kubelet[2482]: I0517 00:17:06.849496 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67f459565f-9ljdj" podStartSLOduration=41.031882288 podStartE2EDuration="45.84948019s" podCreationTimestamp="2025-05-17 00:16:21 +0000 UTC" firstStartedPulling="2025-05-17 00:17:00.885333903 +0000 UTC m=+54.456016660" lastFinishedPulling="2025-05-17 00:17:05.702931806 +0000 UTC m=+59.273614562" 
observedRunningTime="2025-05-17 00:17:06.828322931 +0000 UTC m=+60.399005687" watchObservedRunningTime="2025-05-17 00:17:06.84948019 +0000 UTC m=+60.420162946" May 17 00:17:06.850967 kubelet[2482]: I0517 00:17:06.849770 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xdkt4" podStartSLOduration=53.849764334 podStartE2EDuration="53.849764334s" podCreationTimestamp="2025-05-17 00:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:17:06.848993166 +0000 UTC m=+60.419675932" watchObservedRunningTime="2025-05-17 00:17:06.849764334 +0000 UTC m=+60.420447090" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.889 [WARNING][5617] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"54da1e60-c26d-45aa-84ac-d213e8845274", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173", Pod:"calico-apiserver-67f459565f-9ljdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8527b3d1f89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.889 [INFO][5617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.889 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" iface="eth0" netns="" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.889 [INFO][5617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.890 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.914 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.914 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.914 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.924 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.924 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.926 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:06.932782 containerd[1462]: 2025-05-17 00:17:06.929 [INFO][5617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:06.933816 containerd[1462]: time="2025-05-17T00:17:06.932845986Z" level=info msg="TearDown network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" successfully" May 17 00:17:06.933868 containerd[1462]: time="2025-05-17T00:17:06.933816808Z" level=info msg="StopPodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" returns successfully" May 17 00:17:06.934708 containerd[1462]: time="2025-05-17T00:17:06.934659129Z" level=info msg="RemovePodSandbox for \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\"" May 17 00:17:06.934708 containerd[1462]: time="2025-05-17T00:17:06.934706838Z" level=info msg="Forcibly stopping sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\"" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.970 [WARNING][5646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"54da1e60-c26d-45aa-84ac-d213e8845274", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6809b9cf16b4b703761a86404c1e0c4dfb7c80e69c95df4edcfbc54611ec0173", Pod:"calico-apiserver-67f459565f-9ljdj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8527b3d1f89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.970 [INFO][5646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.970 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" iface="eth0" netns="" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.970 [INFO][5646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.970 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.991 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.991 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.991 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.998 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:06.998 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" HandleID="k8s-pod-network.75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" Workload="localhost-k8s-calico--apiserver--67f459565f--9ljdj-eth0" May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:07.000 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.005274 containerd[1462]: 2025-05-17 00:17:07.002 [INFO][5646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607" May 17 00:17:07.005670 containerd[1462]: time="2025-05-17T00:17:07.005323025Z" level=info msg="TearDown network for sandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" successfully" May 17 00:17:07.009877 containerd[1462]: time="2025-05-17T00:17:07.009828521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:07.009929 containerd[1462]: time="2025-05-17T00:17:07.009898192Z" level=info msg="RemovePodSandbox \"75b2d677f2f051a610d722674ec7a4ff5acb3ac42ff2303d39ad0bf92ae0c607\" returns successfully" May 17 00:17:07.010381 containerd[1462]: time="2025-05-17T00:17:07.010353587Z" level=info msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\"" May 17 00:17:07.022832 systemd-networkd[1397]: calid68fc353d5b: Gained IPv6LL May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.044 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e170e92d-6fac-4790-9e44-4b5889f835a0", ResourceVersion:"1221", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393", Pod:"calico-apiserver-67f459565f-mjks8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2c254a51b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.044 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.044 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" iface="eth0" netns="" May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.044 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.044 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.064 [INFO][5681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.064 [INFO][5681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.064 [INFO][5681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.069 [WARNING][5681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.069 [INFO][5681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.070 [INFO][5681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.075492 containerd[1462]: 2025-05-17 00:17:07.072 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.075492 containerd[1462]: time="2025-05-17T00:17:07.075565135Z" level=info msg="TearDown network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" successfully" May 17 00:17:07.075492 containerd[1462]: time="2025-05-17T00:17:07.075592356Z" level=info msg="StopPodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" returns successfully" May 17 00:17:07.076595 containerd[1462]: time="2025-05-17T00:17:07.076548741Z" level=info msg="RemovePodSandbox for \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\"" May 17 00:17:07.076595 containerd[1462]: time="2025-05-17T00:17:07.076588095Z" level=info msg="Forcibly stopping sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\"" May 17 00:17:07.087820 systemd-networkd[1397]: calia0c3bd7a2a6: Gained IPv6LL May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.113 [WARNING][5699] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0", GenerateName:"calico-apiserver-67f459565f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e170e92d-6fac-4790-9e44-4b5889f835a0", ResourceVersion:"1221", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f459565f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7584d10f0b151c000ca0a6d9747d4efc5bea9209ab64fc91b5e5586a9a163393", Pod:"calico-apiserver-67f459565f-mjks8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2c254a51b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.113 [INFO][5699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.113 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" iface="eth0" netns="" May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.113 [INFO][5699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.113 [INFO][5699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.133 [INFO][5708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.133 [INFO][5708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.133 [INFO][5708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.146 [WARNING][5708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.146 [INFO][5708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" HandleID="k8s-pod-network.2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" Workload="localhost-k8s-calico--apiserver--67f459565f--mjks8-eth0" May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.147 [INFO][5708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.152954 containerd[1462]: 2025-05-17 00:17:07.150 [INFO][5699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176" May 17 00:17:07.153446 containerd[1462]: time="2025-05-17T00:17:07.153418500Z" level=info msg="TearDown network for sandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" successfully" May 17 00:17:07.281117 containerd[1462]: time="2025-05-17T00:17:07.281052561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:07.281238 containerd[1462]: time="2025-05-17T00:17:07.281130117Z" level=info msg="RemovePodSandbox \"2da30ffbf58aea6ed11c9b047013f63f9037c864f81523002f038113a3a48176\" returns successfully" May 17 00:17:07.281710 containerd[1462]: time="2025-05-17T00:17:07.281547250Z" level=info msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\"" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.323 [WARNING][5726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2864e22f-90c3-40b7-81ed-054edc334c43", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9", Pod:"coredns-668d6bf9bc-xdkt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid68fc353d5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.323 [INFO][5726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.323 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" iface="eth0" netns="" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.323 [INFO][5726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.323 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.346 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.346 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.346 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.352 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.352 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.354 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.360422 containerd[1462]: 2025-05-17 00:17:07.357 [INFO][5726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.361652 containerd[1462]: time="2025-05-17T00:17:07.360477325Z" level=info msg="TearDown network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" successfully" May 17 00:17:07.361652 containerd[1462]: time="2025-05-17T00:17:07.360510407Z" level=info msg="StopPodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" returns successfully" May 17 00:17:07.361652 containerd[1462]: time="2025-05-17T00:17:07.361064106Z" level=info msg="RemovePodSandbox for \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\"" May 17 00:17:07.361652 containerd[1462]: time="2025-05-17T00:17:07.361115512Z" level=info msg="Forcibly stopping sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\"" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.396 [WARNING][5753] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2864e22f-90c3-40b7-81ed-054edc334c43", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c4786ed64b1dba9e8472459748ac5d5cf70a5cb4d310a06659e6a7f7181a9d9", Pod:"coredns-668d6bf9bc-xdkt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid68fc353d5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.396 [INFO][5753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.396 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" iface="eth0" netns="" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.396 [INFO][5753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.396 [INFO][5753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.419 [INFO][5762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.420 [INFO][5762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.420 [INFO][5762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.424 [WARNING][5762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.425 [INFO][5762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" HandleID="k8s-pod-network.bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" Workload="localhost-k8s-coredns--668d6bf9bc--xdkt4-eth0" May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.426 [INFO][5762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.432080 containerd[1462]: 2025-05-17 00:17:07.429 [INFO][5753] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358" May 17 00:17:07.432750 containerd[1462]: time="2025-05-17T00:17:07.432607197Z" level=info msg="TearDown network for sandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" successfully" May 17 00:17:07.439202 containerd[1462]: time="2025-05-17T00:17:07.439152982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:07.439281 containerd[1462]: time="2025-05-17T00:17:07.439240306Z" level=info msg="RemovePodSandbox \"bda921156c6917079a47113956485663969ab5ebd801ce239321a0d1fae6d358\" returns successfully" May 17 00:17:07.439813 containerd[1462]: time="2025-05-17T00:17:07.439780400Z" level=info msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\"" May 17 00:17:07.534936 systemd-networkd[1397]: cali6e364d3b92e: Gained IPv6LL May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.515 [WARNING][5780] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"2af7b115-8c11-4444-9b1c-fa1f02b3517f", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b", Pod:"goldmane-78d55f7ddc-8b9gs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f4e9f3010a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.516 [INFO][5780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.516 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" iface="eth0" netns="" May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.516 [INFO][5780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.516 [INFO][5780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.540 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.540 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.540 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.561 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.561 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.563 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.568978 containerd[1462]: 2025-05-17 00:17:07.566 [INFO][5780] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.569329 containerd[1462]: time="2025-05-17T00:17:07.569037218Z" level=info msg="TearDown network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" successfully" May 17 00:17:07.569329 containerd[1462]: time="2025-05-17T00:17:07.569066873Z" level=info msg="StopPodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" returns successfully" May 17 00:17:07.569593 containerd[1462]: time="2025-05-17T00:17:07.569557585Z" level=info msg="RemovePodSandbox for \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\"" May 17 00:17:07.569593 containerd[1462]: time="2025-05-17T00:17:07.569590627Z" level=info msg="Forcibly stopping sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\"" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.672 [WARNING][5807] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"2af7b115-8c11-4444-9b1c-fa1f02b3517f", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"522ddbd5bd54a8e35666acfc4c0a57fae531b5a6b5d960d1bfaeacd05d2abc3b", Pod:"goldmane-78d55f7ddc-8b9gs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f4e9f3010a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.672 [INFO][5807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.672 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" iface="eth0" netns="" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.672 [INFO][5807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.672 [INFO][5807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.691 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.692 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.692 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.724 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.724 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" HandleID="k8s-pod-network.4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" Workload="localhost-k8s-goldmane--78d55f7ddc--8b9gs-eth0" May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.726 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.731320 containerd[1462]: 2025-05-17 00:17:07.728 [INFO][5807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8" May 17 00:17:07.731320 containerd[1462]: time="2025-05-17T00:17:07.731284792Z" level=info msg="TearDown network for sandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" successfully" May 17 00:17:07.776800 containerd[1462]: time="2025-05-17T00:17:07.776748468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:07.776890 containerd[1462]: time="2025-05-17T00:17:07.776868804Z" level=info msg="RemovePodSandbox \"4938c567ff78bd9dd6cb8f0fc194316d16e5af80a13b0a5796aa3189316028d8\" returns successfully" May 17 00:17:07.777395 containerd[1462]: time="2025-05-17T00:17:07.777357210Z" level=info msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\"" May 17 00:17:07.837721 kubelet[2482]: E0517 00:17:07.837579 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:07.839911 kubelet[2482]: I0517 00:17:07.839895 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:17:07.847275 kubelet[2482]: E0517 00:17:07.847244 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.827 [WARNING][5834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80", Pod:"coredns-668d6bf9bc-vmtqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24f1ea739e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.828 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.828 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" iface="eth0" netns="" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.828 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.828 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.856 [INFO][5843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.857 [INFO][5843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.857 [INFO][5843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.862 [WARNING][5843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.862 [INFO][5843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.864 [INFO][5843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.871760 containerd[1462]: 2025-05-17 00:17:07.868 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.872443 containerd[1462]: time="2025-05-17T00:17:07.871812914Z" level=info msg="TearDown network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" successfully" May 17 00:17:07.872443 containerd[1462]: time="2025-05-17T00:17:07.871855373Z" level=info msg="StopPodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" returns successfully" May 17 00:17:07.872643 containerd[1462]: time="2025-05-17T00:17:07.872607625Z" level=info msg="RemovePodSandbox for \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\"" May 17 00:17:07.872725 containerd[1462]: time="2025-05-17T00:17:07.872656657Z" level=info msg="Forcibly stopping sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\"" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.907 [WARNING][5861] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fbfa8f9e-2caa-4166-b768-e488cc5c9d0d", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fabfb0d91dc21044c416857aa324dd95e83a6a556809d349f417448cdd0d0a80", Pod:"coredns-668d6bf9bc-vmtqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24f1ea739e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.908 [INFO][5861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.908 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" iface="eth0" netns="" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.908 [INFO][5861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.908 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.931 [INFO][5871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.931 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.931 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.938 [WARNING][5871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.938 [INFO][5871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" HandleID="k8s-pod-network.fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" Workload="localhost-k8s-coredns--668d6bf9bc--vmtqw-eth0" May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.940 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:07.947663 containerd[1462]: 2025-05-17 00:17:07.943 [INFO][5861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116" May 17 00:17:07.948532 containerd[1462]: time="2025-05-17T00:17:07.948491173Z" level=info msg="TearDown network for sandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" successfully" May 17 00:17:07.956896 containerd[1462]: time="2025-05-17T00:17:07.956850071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:07.956961 containerd[1462]: time="2025-05-17T00:17:07.956928769Z" level=info msg="RemovePodSandbox \"fb9c42b5199ffc95d15f4a01a65da88ebce89676e28b0c5774a1365d70a16116\" returns successfully" May 17 00:17:07.957502 containerd[1462]: time="2025-05-17T00:17:07.957468160Z" level=info msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\"" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.002 [WARNING][5891] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.002 [INFO][5891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.002 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" iface="eth0" netns="" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.002 [INFO][5891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.002 [INFO][5891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.041 [INFO][5900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.042 [INFO][5900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.042 [INFO][5900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.071 [WARNING][5900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.071 [INFO][5900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.074 [INFO][5900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:08.089546 containerd[1462]: 2025-05-17 00:17:08.082 [INFO][5891] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.090335 containerd[1462]: time="2025-05-17T00:17:08.089588938Z" level=info msg="TearDown network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" successfully" May 17 00:17:08.090335 containerd[1462]: time="2025-05-17T00:17:08.089615307Z" level=info msg="StopPodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" returns successfully" May 17 00:17:08.090587 containerd[1462]: time="2025-05-17T00:17:08.090437099Z" level=info msg="RemovePodSandbox for \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\"" May 17 00:17:08.090587 containerd[1462]: time="2025-05-17T00:17:08.090468788Z" level=info msg="Forcibly stopping sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\"" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.140 [WARNING][5918] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.140 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.140 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" iface="eth0" netns="" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.140 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.141 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.163 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.163 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.163 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.179 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.179 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" HandleID="k8s-pod-network.457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.181 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:08.186103 containerd[1462]: 2025-05-17 00:17:08.183 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e" May 17 00:17:08.186490 containerd[1462]: time="2025-05-17T00:17:08.186167388Z" level=info msg="TearDown network for sandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" successfully" May 17 00:17:08.396752 containerd[1462]: time="2025-05-17T00:17:08.396614129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:08.396752 containerd[1462]: time="2025-05-17T00:17:08.396700741Z" level=info msg="RemovePodSandbox \"457686ac4c45989e17e3766b2044893b95f73bd9cc62cdd30599e7109dc5379e\" returns successfully" May 17 00:17:08.397501 containerd[1462]: time="2025-05-17T00:17:08.397135547Z" level=info msg="StopPodSandbox for \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\"" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.431 [WARNING][5949] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.431 [INFO][5949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.431 [INFO][5949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" iface="eth0" netns="" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.431 [INFO][5949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.431 [INFO][5949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.451 [INFO][5958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.452 [INFO][5958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.452 [INFO][5958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.457 [WARNING][5958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.457 [INFO][5958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.458 [INFO][5958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:08.464114 containerd[1462]: 2025-05-17 00:17:08.461 [INFO][5949] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.464522 containerd[1462]: time="2025-05-17T00:17:08.464160294Z" level=info msg="TearDown network for sandbox \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\" successfully" May 17 00:17:08.464522 containerd[1462]: time="2025-05-17T00:17:08.464186754Z" level=info msg="StopPodSandbox for \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\" returns successfully" May 17 00:17:08.464928 containerd[1462]: time="2025-05-17T00:17:08.464894071Z" level=info msg="RemovePodSandbox for \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\"" May 17 00:17:08.465120 containerd[1462]: time="2025-05-17T00:17:08.464933646Z" level=info msg="Forcibly stopping sandbox \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\"" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.499 [WARNING][5975] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" WorkloadEndpoint="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.499 [INFO][5975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.499 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" iface="eth0" netns="" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.499 [INFO][5975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.499 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.522 [INFO][5983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.522 [INFO][5983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.522 [INFO][5983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.528 [WARNING][5983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.528 [INFO][5983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" HandleID="k8s-pod-network.564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" Workload="localhost-k8s-whisker--5ff7b45b78--h4g9j-eth0" May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.530 [INFO][5983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:17:08.535869 containerd[1462]: 2025-05-17 00:17:08.532 [INFO][5975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e" May 17 00:17:08.536642 containerd[1462]: time="2025-05-17T00:17:08.535940956Z" level=info msg="TearDown network for sandbox \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\" successfully" May 17 00:17:08.541608 containerd[1462]: time="2025-05-17T00:17:08.541575601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:17:08.541667 containerd[1462]: time="2025-05-17T00:17:08.541635022Z" level=info msg="RemovePodSandbox \"564e0ac929c96a7bf9875e92dda36b76c158a9eb4636f1e00854b8137190083e\" returns successfully" May 17 00:17:08.851382 kubelet[2482]: E0517 00:17:08.851138 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:17:10.055061 containerd[1462]: time="2025-05-17T00:17:10.055013102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:10.057356 containerd[1462]: time="2025-05-17T00:17:10.057308189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:17:10.059505 containerd[1462]: time="2025-05-17T00:17:10.059471860Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:10.063252 containerd[1462]: time="2025-05-17T00:17:10.062646477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:17:10.064216 containerd[1462]: time="2025-05-17T00:17:10.063446880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.466525166s" May 17 00:17:10.064216 containerd[1462]: time="2025-05-17T00:17:10.063475694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns 
image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:17:10.064902 containerd[1462]: time="2025-05-17T00:17:10.064878836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:17:10.085146 containerd[1462]: time="2025-05-17T00:17:10.085102999Z" level=info msg="CreateContainer within sandbox \"7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:17:10.102174 containerd[1462]: time="2025-05-17T00:17:10.101989669Z" level=info msg="CreateContainer within sandbox \"7a6017e218af2b70b6c1ec1e446869fd4e2d6db9948283d5821098639dc8ad5a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"97b52eab237246d0aac769a7f37da0ac8783115d752bd0f09d1b211ce2dacf04\"" May 17 00:17:10.103237 containerd[1462]: time="2025-05-17T00:17:10.103217723Z" level=info msg="StartContainer for \"97b52eab237246d0aac769a7f37da0ac8783115d752bd0f09d1b211ce2dacf04\"" May 17 00:17:10.148886 systemd[1]: Started cri-containerd-97b52eab237246d0aac769a7f37da0ac8783115d752bd0f09d1b211ce2dacf04.scope - libcontainer container 97b52eab237246d0aac769a7f37da0ac8783115d752bd0f09d1b211ce2dacf04. May 17 00:17:10.191609 containerd[1462]: time="2025-05-17T00:17:10.191561921Z" level=info msg="StartContainer for \"97b52eab237246d0aac769a7f37da0ac8783115d752bd0f09d1b211ce2dacf04\" returns successfully" May 17 00:17:10.310880 containerd[1462]: time="2025-05-17T00:17:10.310755939Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:10.311912 containerd[1462]: time="2025-05-17T00:17:10.311880480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:10.311988 containerd[1462]: time="2025-05-17T00:17:10.311944670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:17:10.312113 kubelet[2482]: E0517 00:17:10.312076 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:10.312654 kubelet[2482]: E0517 00:17:10.312124 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:10.314518 kubelet[2482]: E0517 00:17:10.312236 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxskl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94b97b964-84zct_calico-system(b5f93af1-14f8-4c4c-9d7c-56660fb8cf64): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:10.317765 kubelet[2482]: E0517 00:17:10.316495 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-94b97b964-84zct" podUID="b5f93af1-14f8-4c4c-9d7c-56660fb8cf64" May 17 00:17:10.467340 systemd[1]: Started sshd@14-10.0.0.66:22-10.0.0.1:37268.service - OpenSSH per-connection server daemon (10.0.0.1:37268). May 17 00:17:10.516242 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 37268 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:10.517968 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:10.521908 systemd-logind[1446]: New session 15 of user core. May 17 00:17:10.533823 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:17:10.704364 sshd[6045]: pam_unix(sshd:session): session closed for user core May 17 00:17:10.708864 systemd[1]: sshd@14-10.0.0.66:22-10.0.0.1:37268.service: Deactivated successfully. May 17 00:17:10.711041 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:17:10.711755 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. May 17 00:17:10.713227 systemd-logind[1446]: Removed session 15. May 17 00:17:10.858276 kubelet[2482]: E0517 00:17:10.858206 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-94b97b964-84zct" podUID="b5f93af1-14f8-4c4c-9d7c-56660fb8cf64" May 17 00:17:10.959941 kubelet[2482]: I0517 00:17:10.958437 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67f459565f-mjks8" podStartSLOduration=45.906125998 podStartE2EDuration="49.958416088s" podCreationTimestamp="2025-05-17 00:16:21 +0000 UTC" firstStartedPulling="2025-05-17 00:17:02.294937953 +0000 UTC m=+55.865620709" lastFinishedPulling="2025-05-17 00:17:06.347228043 +0000 UTC m=+59.917910799" observedRunningTime="2025-05-17 00:17:06.933367004 +0000 UTC m=+60.504049760" watchObservedRunningTime="2025-05-17 00:17:10.958416088 +0000 UTC m=+64.529098845" May 17 00:17:11.961323 kubelet[2482]: I0517 00:17:11.961092 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cd784c9b6-wxxjz" podStartSLOduration=43.658423588 podStartE2EDuration="47.961059255s" podCreationTimestamp="2025-05-17 00:16:24 +0000 UTC" firstStartedPulling="2025-05-17 00:17:05.762055848 +0000 UTC m=+59.332738604" lastFinishedPulling="2025-05-17 
00:17:10.064691515 +0000 UTC m=+63.635374271" observedRunningTime="2025-05-17 00:17:11.01589894 +0000 UTC m=+64.586581696" watchObservedRunningTime="2025-05-17 00:17:11.961059255 +0000 UTC m=+65.531742011" May 17 00:17:15.718303 systemd[1]: Started sshd@15-10.0.0.66:22-10.0.0.1:37274.service - OpenSSH per-connection server daemon (10.0.0.1:37274). May 17 00:17:15.755980 sshd[6098]: Accepted publickey for core from 10.0.0.1 port 37274 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:15.757647 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:15.761647 systemd-logind[1446]: New session 16 of user core. May 17 00:17:15.768885 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:17:15.886148 sshd[6098]: pam_unix(sshd:session): session closed for user core May 17 00:17:15.891190 systemd[1]: sshd@15-10.0.0.66:22-10.0.0.1:37274.service: Deactivated successfully. May 17 00:17:15.893079 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:17:15.893808 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. May 17 00:17:15.894628 systemd-logind[1446]: Removed session 16. May 17 00:17:20.897646 systemd[1]: Started sshd@16-10.0.0.66:22-10.0.0.1:47198.service - OpenSSH per-connection server daemon (10.0.0.1:47198). May 17 00:17:20.949468 sshd[6159]: Accepted publickey for core from 10.0.0.1 port 47198 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:20.951509 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:20.955821 systemd-logind[1446]: New session 17 of user core. May 17 00:17:20.964790 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:17:21.110036 sshd[6159]: pam_unix(sshd:session): session closed for user core May 17 00:17:21.114088 systemd[1]: sshd@16-10.0.0.66:22-10.0.0.1:47198.service: Deactivated successfully. May 17 00:17:21.115976 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:17:21.116618 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. May 17 00:17:21.117609 systemd-logind[1446]: Removed session 17. 
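The pod_startup_latency_tracker records above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. The short Go check below reproduces both figures from the calico-apiserver record's own timestamps; it prints 49.958416088s and 45.906125998s, matching the logged values.

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the calico-apiserver-67f459565f-mjks8 record.
        created := mustParse("2025-05-17 00:16:21 +0000 UTC")
        firstPull := mustParse("2025-05-17 00:17:02.294937953 +0000 UTC")
        lastPull := mustParse("2025-05-17 00:17:06.347228043 +0000 UTC")
        running := mustParse("2025-05-17 00:17:10.958416088 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration
        fmt.Println(e2e, slo)
    }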
May 17 00:17:21.502257 containerd[1462]: time="2025-05-17T00:17:21.502212543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:17:21.795490 containerd[1462]: time="2025-05-17T00:17:21.795351965Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:21.796666 containerd[1462]: time="2025-05-17T00:17:21.796634524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:21.796761 containerd[1462]: time="2025-05-17T00:17:21.796655385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:17:21.796951 kubelet[2482]: E0517 00:17:21.796895 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:21.797416 kubelet[2482]: E0517 00:17:21.796960 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:21.797416 kubelet[2482]: E0517 00:17:21.797109 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7fpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-8b9gs_calico-system(2af7b115-8c11-4444-9b1c-fa1f02b3517f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:21.798255 kubelet[2482]: E0517 00:17:21.798222 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f" May 17 00:17:24.504330 containerd[1462]: time="2025-05-17T00:17:24.503955843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:17:24.755599 containerd[1462]: time="2025-05-17T00:17:24.755459107Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:24.756661 containerd[1462]: time="2025-05-17T00:17:24.756614459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:24.756780 containerd[1462]: time="2025-05-17T00:17:24.756751732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:17:24.756949 kubelet[2482]: E0517 00:17:24.756890 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:24.757302 kubelet[2482]: E0517 00:17:24.756951 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:24.757302 kubelet[2482]: E0517 00:17:24.757062 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1ce88814e3ca4076bf6dce8934cc9708,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxskl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94b97b964-84zct_calico-system(b5f93af1-14f8-4c4c-9d7c-56660fb8cf64): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:24.759416 containerd[1462]: time="2025-05-17T00:17:24.759230848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:17:25.034004 containerd[1462]: time="2025-05-17T00:17:25.033841514Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:25.042278 containerd[1462]: time="2025-05-17T00:17:25.042209304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:25.042361 containerd[1462]: time="2025-05-17T00:17:25.042308744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:17:25.042572 kubelet[2482]: E0517 00:17:25.042510 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:25.042750 kubelet[2482]: E0517 00:17:25.042575 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:25.042822 kubelet[2482]: E0517 00:17:25.042717 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxskl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94b97b964-84zct_calico-system(b5f93af1-14f8-4c4c-9d7c-56660fb8cf64): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:25.044094 kubelet[2482]: E0517 00:17:25.044048 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-94b97b964-84zct" podUID="b5f93af1-14f8-4c4c-9d7c-56660fb8cf64" May 17 00:17:26.121642 systemd[1]: Started sshd@17-10.0.0.66:22-10.0.0.1:47208.service - OpenSSH per-connection server daemon (10.0.0.1:47208). May 17 00:17:26.161951 sshd[6175]: Accepted publickey for core from 10.0.0.1 port 47208 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:26.163560 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:26.167808 systemd-logind[1446]: New session 18 of user core. May 17 00:17:26.175857 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:17:26.356120 sshd[6175]: pam_unix(sshd:session): session closed for user core May 17 00:17:26.369308 systemd[1]: sshd@17-10.0.0.66:22-10.0.0.1:47208.service: Deactivated successfully. May 17 00:17:26.371792 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:17:26.374489 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. May 17 00:17:26.382047 systemd[1]: Started sshd@18-10.0.0.66:22-10.0.0.1:47224.service - OpenSSH per-connection server daemon (10.0.0.1:47224). May 17 00:17:26.382905 systemd-logind[1446]: Removed session 18. May 17 00:17:26.412384 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 47224 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:26.414039 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:26.418269 systemd-logind[1446]: New session 19 of user core. May 17 00:17:26.424803 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:17:26.620615 sshd[6189]: pam_unix(sshd:session): session closed for user core May 17 00:17:26.628772 systemd[1]: sshd@18-10.0.0.66:22-10.0.0.1:47224.service: Deactivated successfully. May 17 00:17:26.630585 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:17:26.631579 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. May 17 00:17:26.638055 systemd[1]: Started sshd@19-10.0.0.66:22-10.0.0.1:47240.service - OpenSSH per-connection server daemon (10.0.0.1:47240). May 17 00:17:26.639194 systemd-logind[1446]: Removed session 19. May 17 00:17:26.681916 sshd[6201]: Accepted publickey for core from 10.0.0.1 port 47240 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:17:26.683712 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:17:26.687860 systemd-logind[1446]: New session 20 of user core. May 17 00:17:26.692817 systemd[1]: Started session-20.scope - Session 20 of User core. 
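Every pull failure above dies at the same step: before any layer is fetched, containerd asks the registry's token endpoint for an anonymous bearer token, and ghcr.io answers 403 Forbidden, so there is no transport-level problem to retry. The probe can be reproduced outside containerd with plain net/http; this is a diagnostic sketch only, with the URL copied verbatim from the whisker-backend errors above.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Token endpoint exactly as logged by containerd.
        url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io"
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("network-level failure:", err) // not what this log shows
            return
        }
        defer resp.Body.Close()
        // "403 Forbidden" here matches the logged failure: the registry
        // refuses even anonymous pull access to this repository.
        fmt.Println("token endpoint status:", resp.Status)
    }

A 401 at this step would instead suggest a credential problem that imagePullSecrets could address; a 403 on the anonymous token itself means retrying the same unauthenticated pull cannot succeed.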
May 17 00:17:27.373811 kubelet[2482]: I0517 00:17:27.373758 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:17:27.653211 sshd[6201]: pam_unix(sshd:session): session closed for user core
May 17 00:17:27.662850 systemd[1]: sshd@19-10.0.0.66:22-10.0.0.1:47240.service: Deactivated successfully.
May 17 00:17:27.666011 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:17:27.667933 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
May 17 00:17:27.676063 systemd[1]: Started sshd@20-10.0.0.66:22-10.0.0.1:47242.service - OpenSSH per-connection server daemon (10.0.0.1:47242).
May 17 00:17:27.677002 systemd-logind[1446]: Removed session 20.
May 17 00:17:27.705423 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 47242 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:17:27.707108 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:17:27.711467 systemd-logind[1446]: New session 21 of user core.
May 17 00:17:27.718816 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:17:28.016913 sshd[6222]: pam_unix(sshd:session): session closed for user core
May 17 00:17:28.025730 systemd[1]: sshd@20-10.0.0.66:22-10.0.0.1:47242.service: Deactivated successfully.
May 17 00:17:28.027404 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:17:28.029053 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
May 17 00:17:28.037148 systemd[1]: Started sshd@21-10.0.0.66:22-10.0.0.1:53380.service - OpenSSH per-connection server daemon (10.0.0.1:53380).
May 17 00:17:28.038521 systemd-logind[1446]: Removed session 21.
May 17 00:17:28.076774 sshd[6234]: Accepted publickey for core from 10.0.0.1 port 53380 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:17:28.078638 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:17:28.082872 systemd-logind[1446]: New session 22 of user core.
May 17 00:17:28.089818 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:17:28.201326 sshd[6234]: pam_unix(sshd:session): session closed for user core
May 17 00:17:28.205761 systemd[1]: sshd@21-10.0.0.66:22-10.0.0.1:53380.service: Deactivated successfully.
May 17 00:17:28.207635 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:17:28.208349 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
May 17 00:17:28.209263 systemd-logind[1446]: Removed session 22.
May 17 00:17:28.501702 kubelet[2482]: E0517 00:17:28.501634 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:17:31.502134 kubelet[2482]: E0517 00:17:31.502090 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:17:33.212945 systemd[1]: Started sshd@22-10.0.0.66:22-10.0.0.1:53394.service - OpenSSH per-connection server daemon (10.0.0.1:53394).
May 17 00:17:33.247071 sshd[6252]: Accepted publickey for core from 10.0.0.1 port 53394 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:17:33.248574 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:17:33.252375 systemd-logind[1446]: New session 23 of user core.
May 17 00:17:33.258799 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:17:33.367129 sshd[6252]: pam_unix(sshd:session): session closed for user core
May 17 00:17:33.371219 systemd[1]: sshd@22-10.0.0.66:22-10.0.0.1:53394.service: Deactivated successfully.
May 17 00:17:33.373104 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:17:33.373817 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
May 17 00:17:33.374666 systemd-logind[1446]: Removed session 23.
May 17 00:17:35.502795 kubelet[2482]: E0517 00:17:35.502741 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f"
May 17 00:17:37.502109 kubelet[2482]: E0517 00:17:37.502041 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:17:38.382891 systemd[1]: Started sshd@23-10.0.0.66:22-10.0.0.1:34788.service - OpenSSH per-connection server daemon (10.0.0.1:34788).
May 17 00:17:38.417957 sshd[6275]: Accepted publickey for core from 10.0.0.1 port 34788 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:17:38.419496 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:17:38.423285 systemd-logind[1446]: New session 24 of user core.
May 17 00:17:38.431807 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:17:38.534865 sshd[6275]: pam_unix(sshd:session): session closed for user core
May 17 00:17:38.538350 systemd[1]: sshd@23-10.0.0.66:22-10.0.0.1:34788.service: Deactivated successfully.
May 17 00:17:38.540127 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:17:38.540697 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
May 17 00:17:38.541520 systemd-logind[1446]: Removed session 24.
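The ImagePullBackOff for goldmane above is kubelet's retry state rather than a new failure: the pull already failed with ErrImagePull, and kubelet re-attempts it with exponential back-off (10 seconds doubling up to a 5-minute cap by default). A sketch of how one might watch the retry state from a machine with cluster credentials (pod name taken from the log; a working kubeconfig is assumed):

    # Show the container's waiting reason (ImagePullBackOff) and the recent pull events for the pod.
    kubectl describe pod goldmane-78d55f7ddc-8b9gs -n calico-system
    kubectl get events -n calico-system --field-selector involvedObject.name=goldmane-78d55f7ddc-8b9gs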
May 17 00:17:40.502970 kubelet[2482]: E0517 00:17:40.502886 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-94b97b964-84zct" podUID="b5f93af1-14f8-4c4c-9d7c-56660fb8cf64"
May 17 00:17:43.501647 kubelet[2482]: E0517 00:17:43.501607 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:17:43.548517 systemd[1]: Started sshd@24-10.0.0.66:22-10.0.0.1:34800.service - OpenSSH per-connection server daemon (10.0.0.1:34800).
May 17 00:17:43.585484 sshd[6312]: Accepted publickey for core from 10.0.0.1 port 34800 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:17:43.587338 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:17:43.591419 systemd-logind[1446]: New session 25 of user core.
May 17 00:17:43.600884 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 00:17:43.708486 sshd[6312]: pam_unix(sshd:session): session closed for user core
May 17 00:17:43.712148 systemd[1]: sshd@24-10.0.0.66:22-10.0.0.1:34800.service: Deactivated successfully.
May 17 00:17:43.714242 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:17:43.715028 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
May 17 00:17:43.716205 systemd-logind[1446]: Removed session 25.
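The recurring dns.go:153 error is a node-level condition, unrelated to the registry failures: the glibc resolver honors at most three nameserver entries in resolv.conf (MAXNS=3), so kubelet warns when the resolv.conf it propagates to pods lists more, and applies only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A quick check on the node (the path shown is the conventional default; kubelet may be pointed elsewhere via --resolv-conf):

    # Count the nameserver entries kubelet is reading; a count above three triggers this warning.
    grep -c '^nameserver' /etc/resolv.conf
    grep '^nameserver' /etc/resolv.conf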
May 17 00:17:47.503060 containerd[1462]: time="2025-05-17T00:17:47.503004578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:17:47.757284 containerd[1462]: time="2025-05-17T00:17:47.757128482Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:17:47.760940 containerd[1462]: time="2025-05-17T00:17:47.760883075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:17:47.761048 containerd[1462]: time="2025-05-17T00:17:47.760955292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:17:47.761191 kubelet[2482]: E0517 00:17:47.761147 2482 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:17:47.761598 kubelet[2482]: E0517 00:17:47.761203 2482 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:17:47.761598 kubelet[2482]: E0517 00:17:47.761340 2482 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7fpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-8b9gs_calico-system(2af7b115-8c11-4444-9b1c-fa1f02b3517f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:17:47.763368 kubelet[2482]: E0517 00:17:47.763315 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-8b9gs" podUID="2af7b115-8c11-4444-9b1c-fa1f02b3517f"
May 17 00:17:48.733095 systemd[1]: Started sshd@25-10.0.0.66:22-10.0.0.1:56860.service - OpenSSH per-connection server daemon (10.0.0.1:56860).
May 17 00:17:48.781350 sshd[6329]: Accepted publickey for core from 10.0.0.1 port 56860 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:17:48.783873 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:17:48.789342 systemd-logind[1446]: New session 26 of user core.
May 17 00:17:48.793939 systemd[1]: Started session-26.scope - Session 26 of User core.
May 17 00:17:49.020874 sshd[6329]: pam_unix(sshd:session): session closed for user core
May 17 00:17:49.025611 systemd[1]: sshd@25-10.0.0.66:22-10.0.0.1:56860.service: Deactivated successfully.
May 17 00:17:49.029049 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:17:49.029999 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
May 17 00:17:49.031179 systemd-logind[1446]: Removed session 26.
May 17 00:17:50.779807 systemd[1]: run-containerd-runc-k8s.io-837846e8e28ef597e91e2c33f6de41bf2110d256fa2ca7819e9e23b20a750a39-runc.KaBIRu.mount: Deactivated successfully.
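In the containerd records above, "trying next host" shows its registry-host fallback: with only the default ghcr.io host configured there is nothing to fall back to, so the pull aborts almost immediately (bytes read=86, just the error body). If shell access to the node is available, the same pull kubelet attempted can be retried by hand in the containerd namespace kubelet uses (a diagnostic sketch; assumes the ctr client is installed alongside containerd, as it is on typical containerd deployments):

    # Re-run the exact pull from the log in the k8s.io namespace; expect the same 403 from the token endpoint.
    ctr -n k8s.io images pull ghcr.io/flatcar/calico/goldmane:v3.30.0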