May 27 17:37:49.862280 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:37:49.862306 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:37:49.862316 kernel: BIOS-provided physical RAM map:
May 27 17:37:49.862324 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 27 17:37:49.862331 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 27 17:37:49.862339 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 17:37:49.862347 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 27 17:37:49.862357 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 27 17:37:49.862364 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 17:37:49.862372 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 17:37:49.862379 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 17:37:49.862386 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 17:37:49.862394 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 17:37:49.862403 kernel: NX (Execute Disable) protection: active
May 27 17:37:49.862414 kernel: APIC: Static calls initialized
May 27 17:37:49.862423 kernel: SMBIOS 2.8 present.
May 27 17:37:49.862432 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 27 17:37:49.862442 kernel: DMI: Memory slots populated: 1/1
May 27 17:37:49.862451 kernel: Hypervisor detected: KVM
May 27 17:37:49.862460 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 17:37:49.862468 kernel: kvm-clock: using sched offset of 3436192678 cycles
May 27 17:37:49.862478 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:37:49.862488 kernel: tsc: Detected 2794.748 MHz processor
May 27 17:37:49.862498 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:37:49.862510 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:37:49.862520 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 27 17:37:49.862529 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 17:37:49.862539 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:37:49.862548 kernel: Using GB pages for direct mapping
May 27 17:37:49.862557 kernel: ACPI: Early table checksum verification disabled
May 27 17:37:49.862566 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 27 17:37:49.862576 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:37:49.862588 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:37:49.862598 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:37:49.862607 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 27 17:37:49.862626 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:37:49.862635 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:37:49.862645 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:37:49.862654 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:37:49.862664 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
May 27 17:37:49.862680 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
May 27 17:37:49.862690 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 27 17:37:49.862699 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
May 27 17:37:49.862709 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
May 27 17:37:49.862718 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
May 27 17:37:49.862728 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
May 27 17:37:49.862740 kernel: No NUMA configuration found
May 27 17:37:49.862749 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 27 17:37:49.862759 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 27 17:37:49.862769 kernel: Zone ranges:
May 27 17:37:49.862779 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:37:49.862788 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 27 17:37:49.862798 kernel: Normal empty
May 27 17:37:49.862808 kernel: Device empty
May 27 17:37:49.862817 kernel: Movable zone start for each node
May 27 17:37:49.862827 kernel: Early memory node ranges
May 27 17:37:49.862839 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 17:37:49.862849 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 27 17:37:49.862860 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 27 17:37:49.862870 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:37:49.862879 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 17:37:49.862889 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 27 17:37:49.862899 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 17:37:49.862909 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 17:37:49.862918 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 17:37:49.862931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 17:37:49.862941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 17:37:49.862966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:37:49.862975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 17:37:49.862984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 17:37:49.862994 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:37:49.863004 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 17:37:49.863015 kernel: TSC deadline timer available
May 27 17:37:49.863026 kernel: CPU topo: Max. logical packages: 1
May 27 17:37:49.863042 kernel: CPU topo: Max. logical dies: 1
May 27 17:37:49.863051 kernel: CPU topo: Max. dies per package: 1
May 27 17:37:49.863060 kernel: CPU topo: Max. threads per core: 1
May 27 17:37:49.863070 kernel: CPU topo: Num. cores per package: 4
May 27 17:37:49.863080 kernel: CPU topo: Num. threads per package: 4
May 27 17:37:49.863090 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 17:37:49.863099 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 17:37:49.863109 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 17:37:49.863118 kernel: kvm-guest: setup PV sched yield
May 27 17:37:49.863128 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 17:37:49.863141 kernel: Booting paravirtualized kernel on KVM
May 27 17:37:49.863151 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:37:49.863161 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 17:37:49.863170 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 17:37:49.863180 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 17:37:49.863190 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 17:37:49.863199 kernel: kvm-guest: PV spinlocks enabled
May 27 17:37:49.863209 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:37:49.863220 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:37:49.863234 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:37:49.863244 kernel: random: crng init done
May 27 17:37:49.863253 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:37:49.863263 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:37:49.863273 kernel: Fallback order for Node 0: 0
May 27 17:37:49.863295 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 27 17:37:49.863305 kernel: Policy zone: DMA32
May 27 17:37:49.863324 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:37:49.863356 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 17:37:49.863366 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:37:49.863376 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:37:49.863386 kernel: Dynamic Preempt: voluntary
May 27 17:37:49.863395 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:37:49.863406 kernel: rcu: RCU event tracing is enabled.
May 27 17:37:49.863426 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 17:37:49.863445 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:37:49.863455 kernel: Rude variant of Tasks RCU enabled.
May 27 17:37:49.863469 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:37:49.863479 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:37:49.863489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 17:37:49.863499 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:37:49.863509 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:37:49.863518 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:37:49.863527 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 17:37:49.863538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:37:49.863559 kernel: Console: colour VGA+ 80x25
May 27 17:37:49.863568 kernel: printk: legacy console [ttyS0] enabled
May 27 17:37:49.863577 kernel: ACPI: Core revision 20240827
May 27 17:37:49.863585 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 17:37:49.863595 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:37:49.863602 kernel: x2apic enabled
May 27 17:37:49.863610 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:37:49.863628 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 17:37:49.863636 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 17:37:49.863646 kernel: kvm-guest: setup PV IPIs
May 27 17:37:49.863655 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 17:37:49.863662 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:37:49.863670 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 17:37:49.863678 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 17:37:49.863685 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 17:37:49.863693 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 17:37:49.863700 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:37:49.863708 kernel: Spectre V2 : Mitigation: Retpolines
May 27 17:37:49.863717 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:37:49.863725 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 17:37:49.863733 kernel: RETBleed: Mitigation: untrained return thunk
May 27 17:37:49.863741 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 17:37:49.863748 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 17:37:49.863756 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 17:37:49.863764 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 17:37:49.863772 kernel: x86/bugs: return thunk changed
May 27 17:37:49.863781 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 17:37:49.863789 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:37:49.863797 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:37:49.863804 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:37:49.863812 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:37:49.863819 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 17:37:49.863827 kernel: Freeing SMP alternatives memory: 32K
May 27 17:37:49.863834 kernel: pid_max: default: 32768 minimum: 301
May 27 17:37:49.863842 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:37:49.863851 kernel: landlock: Up and running.
May 27 17:37:49.863859 kernel: SELinux: Initializing.
May 27 17:37:49.863866 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:37:49.863874 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:37:49.863882 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 17:37:49.863889 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 17:37:49.863897 kernel: ... version: 0
May 27 17:37:49.863904 kernel: ... bit width: 48
May 27 17:37:49.863912 kernel: ... generic registers: 6
May 27 17:37:49.863922 kernel: ... value mask: 0000ffffffffffff
May 27 17:37:49.863929 kernel: ... max period: 00007fffffffffff
May 27 17:37:49.863937 kernel: ... fixed-purpose events: 0
May 27 17:37:49.863944 kernel: ... event mask: 000000000000003f
May 27 17:37:49.863973 kernel: signal: max sigframe size: 1776
May 27 17:37:49.863981 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:37:49.863989 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:37:49.863996 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:37:49.864004 kernel: smp: Bringing up secondary CPUs ...
May 27 17:37:49.864014 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:37:49.864021 kernel: .... node #0, CPUs: #1 #2 #3
May 27 17:37:49.864029 kernel: smp: Brought up 1 node, 4 CPUs
May 27 17:37:49.864036 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 17:37:49.864044 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 136904K reserved, 0K cma-reserved)
May 27 17:37:49.864052 kernel: devtmpfs: initialized
May 27 17:37:49.864059 kernel: x86/mm: Memory block size: 128MB
May 27 17:37:49.864067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:37:49.864075 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 17:37:49.864085 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:37:49.864094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:37:49.864102 kernel: audit: initializing netlink subsys (disabled)
May 27 17:37:49.864110 kernel: audit: type=2000 audit(1748367467.111:1): state=initialized audit_enabled=0 res=1
May 27 17:37:49.864119 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:37:49.864127 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:37:49.864136 kernel: cpuidle: using governor menu
May 27 17:37:49.864145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:37:49.864152 kernel: dca service started, version 1.12.1
May 27 17:37:49.864162 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 17:37:49.864169 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 17:37:49.864177 kernel: PCI: Using configuration type 1 for base access
May 27 17:37:49.864185 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:37:49.864192 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:37:49.864200 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:37:49.864208 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:37:49.864215 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:37:49.864223 kernel: ACPI: Added _OSI(Module Device)
May 27 17:37:49.864232 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:37:49.864240 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:37:49.864247 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:37:49.864255 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:37:49.864262 kernel: ACPI: Interpreter enabled
May 27 17:37:49.864270 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 17:37:49.864277 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:37:49.864285 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:37:49.864293 kernel: PCI: Using E820 reservations for host bridge windows
May 27 17:37:49.864302 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 17:37:49.864310 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:37:49.864488 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:37:49.864610 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 17:37:49.864738 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 17:37:49.864748 kernel: PCI host bridge to bus 0000:00
May 27 17:37:49.864894 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 17:37:49.865034 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 17:37:49.865160 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 17:37:49.865269 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 27 17:37:49.865374 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 17:37:49.865477 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 27 17:37:49.865582 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:37:49.865725 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 17:37:49.865855 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 17:37:49.865986 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 17:37:49.866118 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 17:37:49.866235 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 17:37:49.866475 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 17:37:49.866628 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 17:37:49.866752 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 27 17:37:49.866868 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 17:37:49.867000 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 17:37:49.867139 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 17:37:49.867255 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 27 17:37:49.867370 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 17:37:49.867484 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 17:37:49.867612 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 17:37:49.867740 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 27 17:37:49.867854 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 27 17:37:49.867982 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 27 17:37:49.868104 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 17:37:49.868227 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 17:37:49.868342 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 17:37:49.868471 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 17:37:49.868586 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 27 17:37:49.868721 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 27 17:37:49.868846 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 17:37:49.868977 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 17:37:49.868988 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 17:37:49.869000 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 17:37:49.869007 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 17:37:49.869015 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 17:37:49.869023 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 17:37:49.869033 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 17:37:49.869041 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 17:37:49.869050 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 17:37:49.869058 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 17:37:49.869066 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 17:37:49.869075 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 17:37:49.869083 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 17:37:49.869090 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 17:37:49.869098 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 17:37:49.869105 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 17:37:49.869113 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 17:37:49.869120 kernel: iommu: Default domain type: Translated
May 27 17:37:49.869128 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:37:49.869135 kernel: PCI: Using ACPI for IRQ routing
May 27 17:37:49.869143 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 17:37:49.869152 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 27 17:37:49.869160 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 27 17:37:49.869276 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 17:37:49.869389 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 17:37:49.869502 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 17:37:49.869512 kernel: vgaarb: loaded
May 27 17:37:49.869520 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 17:37:49.869527 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 17:37:49.869538 kernel: clocksource: Switched to clocksource kvm-clock
May 27 17:37:49.869546 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:37:49.869554 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:37:49.869561 kernel: pnp: PnP ACPI init
May 27 17:37:49.869695 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 17:37:49.869706 kernel: pnp: PnP ACPI: found 6 devices
May 27 17:37:49.869714 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:37:49.869722 kernel: NET: Registered PF_INET protocol family
May 27 17:37:49.869732 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:37:49.869740 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:37:49.869748 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:37:49.869755 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:37:49.869763 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:37:49.869771 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:37:49.869778 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:37:49.869786 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:37:49.869794 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:37:49.869803 kernel: NET: Registered PF_XDP protocol family
May 27 17:37:49.869910 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 17:37:49.870032 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 17:37:49.870160 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 17:37:49.870267 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 27 17:37:49.870371 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 17:37:49.870478 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 27 17:37:49.870488 kernel: PCI: CLS 0 bytes, default 64
May 27 17:37:49.870500 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:37:49.870508 kernel: Initialise system trusted keyrings
May 27 17:37:49.870515 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:37:49.870523 kernel: Key type asymmetric registered
May 27 17:37:49.870531 kernel: Asymmetric key parser 'x509' registered
May 27 17:37:49.870538 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:37:49.870546 kernel: io scheduler mq-deadline registered
May 27 17:37:49.870554 kernel: io scheduler kyber registered
May 27 17:37:49.870561 kernel: io scheduler bfq registered
May 27 17:37:49.870571 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:37:49.870579 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 17:37:49.870587 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 17:37:49.870594 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 17:37:49.870602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:37:49.870610 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:37:49.870627 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 17:37:49.870635 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 17:37:49.870643 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 17:37:49.870768 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 17:37:49.870780 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 17:37:49.870887 kernel: rtc_cmos 00:04: registered as rtc0
May 27 17:37:49.871030 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T17:37:49 UTC (1748367469)
May 27 17:37:49.871154 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 17:37:49.871173 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 17:37:49.871181 kernel: NET: Registered PF_INET6 protocol family
May 27 17:37:49.871188 kernel: Segment Routing with IPv6
May 27 17:37:49.871200 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:37:49.871208 kernel: NET: Registered PF_PACKET protocol family
May 27 17:37:49.871215 kernel: Key type dns_resolver registered
May 27 17:37:49.871223 kernel: IPI shorthand broadcast: enabled
May 27 17:37:49.871230 kernel: sched_clock: Marking stable (3212006675, 145892422)->(3379401625, -21502528)
May 27 17:37:49.871238 kernel: registered taskstats version 1
May 27 17:37:49.871245 kernel: Loading compiled-in X.509 certificates
May 27 17:37:49.871253 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:37:49.871265 kernel: Demotion targets for Node 0: null
May 27 17:37:49.871320 kernel: Key type .fscrypt registered
May 27 17:37:49.871328 kernel: Key type fscrypt-provisioning registered
May 27 17:37:49.871336 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:37:49.871344 kernel: ima: Allocated hash algorithm: sha1
May 27 17:37:49.871351 kernel: ima: No architecture policies found
May 27 17:37:49.871359 kernel: clk: Disabling unused clocks
May 27 17:37:49.871366 kernel: Warning: unable to open an initial console.
May 27 17:37:49.871374 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 17:37:49.871382 kernel: Write protecting the kernel read-only data: 24576k
May 27 17:37:49.871392 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 17:37:49.871399 kernel: Run /init as init process
May 27 17:37:49.871407 kernel: with arguments:
May 27 17:37:49.871414 kernel: /init
May 27 17:37:49.871422 kernel: with environment:
May 27 17:37:49.871429 kernel: HOME=/
May 27 17:37:49.871437 kernel: TERM=linux
May 27 17:37:49.871444 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:37:49.871453 systemd[1]: Successfully made /usr/ read-only.
May 27 17:37:49.871474 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:37:49.871485 systemd[1]: Detected virtualization kvm.
May 27 17:37:49.871493 systemd[1]: Detected architecture x86-64.
May 27 17:37:49.871501 systemd[1]: Running in initrd.
May 27 17:37:49.871509 systemd[1]: No hostname configured, using default hostname.
May 27 17:37:49.871520 systemd[1]: Hostname set to <localhost>.
May 27 17:37:49.871528 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:37:49.871536 systemd[1]: Queued start job for default target initrd.target.
May 27 17:37:49.871544 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:37:49.871553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:37:49.871562 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:37:49.871570 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:37:49.871578 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:37:49.871589 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:37:49.871599 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:37:49.871607 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:37:49.871625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:37:49.871633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:37:49.871641 systemd[1]: Reached target paths.target - Path Units.
May 27 17:37:49.871651 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:37:49.871661 systemd[1]: Reached target swap.target - Swaps.
May 27 17:37:49.871669 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:37:49.871677 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:37:49.871685 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:37:49.871694 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:37:49.871702 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:37:49.871713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:37:49.871721 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:37:49.871731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:37:49.871739 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:37:49.871748 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:37:49.871756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:37:49.871764 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:37:49.871775 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:37:49.871786 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:37:49.871794 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:37:49.871802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:37:49.871811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:37:49.871819 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:37:49.871830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:37:49.871838 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:37:49.871847 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:37:49.871873 systemd-journald[220]: Collecting audit messages is disabled.
May 27 17:37:49.871895 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:37:49.871904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:37:49.871913 systemd-journald[220]: Journal started
May 27 17:37:49.871932 systemd-journald[220]: Runtime Journal (/run/log/journal/58f8c02f1e754aa58088871ed2dc6cc9) is 6M, max 48.6M, 42.5M free.
May 27 17:37:49.868651 systemd-modules-load[222]: Inserted module 'overlay'
May 27 17:37:49.873168 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:37:49.881497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:37:49.916334 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:37:49.916379 kernel: Bridge firewalling registered
May 27 17:37:49.900531 systemd-modules-load[222]: Inserted module 'br_netfilter'
May 27 17:37:49.917540 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:37:49.919206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:37:49.920232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:37:49.934773 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:37:49.939133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:37:49.941728 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:37:49.954125 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:37:49.965499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:37:49.967895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:37:49.971779 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:37:49.979773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:37:49.999433 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:37:50.019389 systemd-resolved[257]: Positive Trust Anchors:
May 27 17:37:50.019407 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:37:50.019439 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:37:50.022033 systemd-resolved[257]: Defaulting to hostname 'linux'.
May 27 17:37:50.023107 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:37:50.029344 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:37:50.112984 kernel: SCSI subsystem initialized
May 27 17:37:50.122971 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:37:50.134016 kernel: iscsi: registered transport (tcp)
May 27 17:37:50.154986 kernel: iscsi: registered transport (qla4xxx)
May 27 17:37:50.155060 kernel: QLogic iSCSI HBA Driver
May 27 17:37:50.174720 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:37:50.196047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:37:50.199946 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:37:50.249653 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:37:50.253179 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:37:50.319992 kernel: raid6: avx2x4 gen() 18112 MB/s
May 27 17:37:50.336971 kernel: raid6: avx2x2 gen() 28807 MB/s
May 27 17:37:50.354094 kernel: raid6: avx2x1 gen() 25693 MB/s
May 27 17:37:50.354124 kernel: raid6: using algorithm avx2x2 gen() 28807 MB/s
May 27 17:37:50.372166 kernel: raid6: .... xor() 19590 MB/s, rmw enabled
May 27 17:37:50.372247 kernel: raid6: using avx2x2 recovery algorithm
May 27 17:37:50.393986 kernel: xor: automatically using best checksumming function avx
May 27 17:37:50.572001 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:37:50.580312 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:37:50.581984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:37:50.621925 systemd-udevd[473]: Using default interface naming scheme 'v255'.
May 27 17:37:50.628066 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:37:50.630387 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:37:50.661708 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
May 27 17:37:50.691938 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:37:50.693839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:37:50.784708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:37:50.787762 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:37:50.822001 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 27 17:37:50.833125 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 17:37:50.837229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 17:37:50.837257 kernel: GPT:9289727 != 19775487
May 27 17:37:50.837271 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 17:37:50.838421 kernel: GPT:9289727 != 19775487
May 27 17:37:50.838442 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 17:37:50.839977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:37:50.849044 kernel: cryptd: max_cpu_qlen set to 1000
May 27 17:37:50.857570 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 17:37:50.857651 kernel: libata version 3.00 loaded.
May 27 17:37:50.859985 kernel: AES CTR mode by8 optimization enabled
May 27 17:37:50.882308 kernel: ahci 0000:00:1f.2: version 3.0
May 27 17:37:50.882534 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 17:37:50.886347 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 17:37:50.886521 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 17:37:50.886669 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 17:37:50.891212 kernel: scsi host0: ahci
May 27 17:37:50.891460 kernel: scsi host1: ahci
May 27 17:37:50.892265 kernel: scsi host2: ahci
May 27 17:37:50.892991 kernel: scsi host3: ahci
May 27 17:37:50.906494 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 17:37:50.926217 kernel: scsi host4: ahci
May 27 17:37:50.926478 kernel: scsi host5: ahci
May 27 17:37:50.926641 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
May 27 17:37:50.926654 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
May 27 17:37:50.926666 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
May 27 17:37:50.927993 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
May 27 17:37:50.928028 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
May 27 17:37:50.930176 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
May 27 17:37:50.938376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 17:37:50.945787 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 17:37:50.957657 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:37:50.972572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 17:37:50.981116 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:37:50.982455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:37:50.982521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:37:50.986093 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:37:51.013736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:37:51.015316 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:37:51.140238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:37:51.165846 disk-uuid[629]: Primary Header is updated.
May 27 17:37:51.165846 disk-uuid[629]: Secondary Entries is updated.
May 27 17:37:51.165846 disk-uuid[629]: Secondary Header is updated.
May 27 17:37:51.171079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:37:51.189980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:37:51.236981 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 17:37:51.237040 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 17:37:51.237989 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 27 17:37:51.239175 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 27 17:37:51.239196 kernel: ata3.00: applying bridge limits
May 27 17:37:51.240467 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 17:37:51.240973 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 17:37:51.242093 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 17:37:51.243979 kernel: ata3.00: configured for UDMA/100
May 27 17:37:51.247997 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 27 17:37:51.298231 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 27 17:37:51.298575 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 17:37:51.329275 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 27 17:37:51.740667 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:37:51.743549 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:37:51.759075 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:37:51.761637 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:37:51.764739 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:37:51.788457 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:37:52.193503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:37:52.194584 disk-uuid[635]: The operation has completed successfully.
May 27 17:37:52.225844 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:37:52.225980 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:37:52.259462 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:37:52.284510 sh[665]: Success
May 27 17:37:52.304319 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:37:52.304349 kernel: device-mapper: uevent: version 1.0.3
May 27 17:37:52.305540 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:37:52.315973 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 17:37:52.347860 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:37:52.349810 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:37:52.363904 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:37:52.373227 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:37:52.373256 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677)
May 27 17:37:52.374796 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd
May 27 17:37:52.375852 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 17:37:52.375869 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:37:52.381677 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:37:52.383871 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:37:52.386208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:37:52.388863 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:37:52.391500 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:37:52.424986 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (710)
May 27 17:37:52.428124 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:37:52.428176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:37:52.428187 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:37:52.436991 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:37:52.438107 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:37:52.440684 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:37:52.541148 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:37:52.546584 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:37:52.648078 ignition[759]: Ignition 2.21.0
May 27 17:37:52.648114 ignition[759]: Stage: fetch-offline
May 27 17:37:52.648171 ignition[759]: no configs at "/usr/lib/ignition/base.d"
May 27 17:37:52.648183 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:37:52.648415 ignition[759]: parsed url from cmdline: ""
May 27 17:37:52.648429 ignition[759]: no config URL provided
May 27 17:37:52.648455 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:37:52.648471 ignition[759]: no config at "/usr/lib/ignition/user.ign"
May 27 17:37:52.648499 ignition[759]: op(1): [started] loading QEMU firmware config module
May 27 17:37:52.655697 systemd-networkd[851]: lo: Link UP
May 27 17:37:52.648504 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 17:37:52.655701 systemd-networkd[851]: lo: Gained carrier
May 27 17:37:52.657908 systemd-networkd[851]: Enumeration completed
May 27 17:37:52.658032 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:37:52.658678 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:37:52.658684 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:37:52.659373 systemd-networkd[851]: eth0: Link UP
May 27 17:37:52.659377 systemd-networkd[851]: eth0: Gained carrier
May 27 17:37:52.659386 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:37:52.661597 systemd[1]: Reached target network.target - Network.
May 27 17:37:52.680006 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:37:52.680235 ignition[759]: op(1): [finished] loading QEMU firmware config module
May 27 17:37:52.721756 ignition[759]: parsing config with SHA512: e8418f8eef1f5c871a751e095e92f2727bcc9823450af340538057b91d624dca0bf0571ab2536a6e384403fae7916aba119457eb49010c1736fc2e8081ce9469
May 27 17:37:52.726303 unknown[759]: fetched base config from "system"
May 27 17:37:52.726317 unknown[759]: fetched user config from "qemu"
May 27 17:37:52.726877 ignition[759]: fetch-offline: fetch-offline passed
May 27 17:37:52.726996 ignition[759]: Ignition finished successfully
May 27 17:37:52.730670 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:37:52.732191 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 27 17:37:52.733284 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:37:52.833917 ignition[859]: Ignition 2.21.0
May 27 17:37:52.833939 ignition[859]: Stage: kargs
May 27 17:37:52.834196 ignition[859]: no configs at "/usr/lib/ignition/base.d"
May 27 17:37:52.834212 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:37:52.835224 ignition[859]: kargs: kargs passed
May 27 17:37:52.835278 ignition[859]: Ignition finished successfully
May 27 17:37:52.840062 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:37:52.842441 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:37:52.894230 ignition[867]: Ignition 2.21.0
May 27 17:37:52.894243 ignition[867]: Stage: disks
May 27 17:37:52.894435 ignition[867]: no configs at "/usr/lib/ignition/base.d"
May 27 17:37:52.894446 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:37:52.898636 ignition[867]: disks: disks passed
May 27 17:37:52.898700 ignition[867]: Ignition finished successfully
May 27 17:37:52.901440 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:37:52.902828 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:37:52.904804 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:37:52.906241 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:37:52.908535 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:37:52.909693 systemd[1]: Reached target basic.target - Basic System.
May 27 17:37:52.913182 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:37:52.956002 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 17:37:52.988335 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:37:52.992668 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:37:53.110992 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none.
May 27 17:37:53.111261 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:37:53.111979 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:37:53.115405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:37:53.117606 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:37:53.119085 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 17:37:53.119151 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:37:53.119186 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:37:53.139220 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:37:53.141928 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:37:53.148628 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (885)
May 27 17:37:53.148651 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:37:53.148662 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:37:53.148671 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:37:53.154040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:37:53.197963 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:37:53.203551 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
May 27 17:37:53.209270 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:37:53.214857 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:37:53.361941 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:37:53.364531 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:37:53.365326 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:37:53.383597 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:37:53.385451 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:37:53.400491 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:37:53.497432 ignition[999]: INFO : Ignition 2.21.0
May 27 17:37:53.497432 ignition[999]: INFO : Stage: mount
May 27 17:37:53.499232 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:37:53.499232 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:37:53.503864 ignition[999]: INFO : mount: mount passed
May 27 17:37:53.504767 ignition[999]: INFO : Ignition finished successfully
May 27 17:37:53.507346 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:37:53.510354 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:37:53.533727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:37:53.556518 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1011) May 27 17:37:53.556582 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:37:53.556597 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:37:53.557412 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:37:53.561707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 17:37:53.613069 ignition[1028]: INFO : Ignition 2.21.0 May 27 17:37:53.613069 ignition[1028]: INFO : Stage: files May 27 17:37:53.614992 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:37:53.614992 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:37:53.617350 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping May 27 17:37:53.617350 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 17:37:53.617350 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 17:37:53.621880 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 17:37:53.621880 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 17:37:53.621880 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 17:37:53.621880 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 17:37:53.621880 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 27 17:37:53.619216 unknown[1028]: wrote ssh authorized keys file for user: core May 27 17:37:53.658884 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 17:37:53.887105 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 17:37:53.887105 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 17:37:53.891224 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 17:37:53.891224 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 17:37:53.891224 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 17:37:53.891224 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:37:53.891224 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:37:53.891224 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:37:53.891224 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:37:53.911191 systemd-networkd[851]: eth0: Gained IPv6LL May 27 17:37:53.913035 ignition[1028]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:37:53.913035 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:37:53.913035 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:37:53.919974 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:37:53.919974 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:37:53.919974 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 27 17:37:54.474700 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 17:37:55.044103 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 17:37:55.044103 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 17:37:55.049332 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:37:55.055892 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:37:55.055892 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 17:37:55.055892 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 27 17:37:55.061671 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:37:55.061671 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:37:55.061671 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 27 17:37:55.061671 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 27 17:37:55.096054 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 17:37:55.103841 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 17:37:55.106027 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 27 17:37:55.106027 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 27 17:37:55.106027 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 27 17:37:55.106027 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 17:37:55.106027 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file 
"/sysroot/etc/.ignition-result.json" May 27 17:37:55.106027 ignition[1028]: INFO : files: files passed May 27 17:37:55.106027 ignition[1028]: INFO : Ignition finished successfully May 27 17:37:55.111402 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 17:37:55.117336 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 17:37:55.122671 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 17:37:55.225153 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 17:37:55.226263 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 17:37:55.228754 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory May 27 17:37:55.230518 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:37:55.230518 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 17:37:55.235390 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:37:55.233405 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:37:55.235943 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 17:37:55.239343 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 17:37:55.305816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 17:37:55.305995 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 17:37:55.309155 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 17:37:55.311817 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 17:37:55.313015 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 17:37:55.314129 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 17:37:55.346439 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:37:55.348229 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 17:37:55.381290 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 17:37:55.382754 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:37:55.383269 systemd[1]: Stopped target timers.target - Timer Units. May 27 17:37:55.383587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 17:37:55.383732 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:37:55.384803 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 17:37:55.385328 systemd[1]: Stopped target basic.target - Basic System. May 27 17:37:55.385693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 17:37:55.386007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:37:55.386516 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 17:37:55.386904 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 17:37:55.387413 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
May 27 17:37:55.387797 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:37:55.388327 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 17:37:55.388661 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 17:37:55.389045 systemd[1]: Stopped target swap.target - Swaps. May 27 17:37:55.389582 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 17:37:55.389748 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 17:37:55.414219 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 17:37:55.416422 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:37:55.417496 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 17:37:55.419736 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:37:55.420798 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 17:37:55.420930 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 17:37:55.426073 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 17:37:55.426230 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:37:55.427377 systemd[1]: Stopped target paths.target - Path Units. May 27 17:37:55.430300 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 17:37:55.435054 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:37:55.435230 systemd[1]: Stopped target slices.target - Slice Units. May 27 17:37:55.438779 systemd[1]: Stopped target sockets.target - Socket Units. May 27 17:37:55.440656 systemd[1]: iscsid.socket: Deactivated successfully. May 27 17:37:55.440748 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:37:55.441633 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 17:37:55.441709 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:37:55.443370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 17:37:55.443481 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:37:55.445243 systemd[1]: ignition-files.service: Deactivated successfully. May 27 17:37:55.445347 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 17:37:55.448197 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 17:37:55.449984 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 17:37:55.453431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 17:37:55.453617 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:37:55.454709 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 17:37:55.454837 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:37:55.463219 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 17:37:55.470172 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 17:37:55.492043 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 27 17:37:55.545864 ignition[1083]: INFO : Ignition 2.21.0 May 27 17:37:55.545864 ignition[1083]: INFO : Stage: umount May 27 17:37:55.547787 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:37:55.547787 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:37:55.547787 ignition[1083]: INFO : umount: umount passed May 27 17:37:55.547787 ignition[1083]: INFO : Ignition finished successfully May 27 17:37:55.549671 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 17:37:55.549839 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 17:37:55.552462 systemd[1]: Stopped target network.target - Network. May 27 17:37:55.553519 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 17:37:55.553596 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 17:37:55.554519 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 17:37:55.554574 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 17:37:55.554845 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 17:37:55.554905 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 17:37:55.555354 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 17:37:55.555409 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 17:37:55.556081 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 17:37:55.562678 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 17:37:55.568333 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 17:37:55.568463 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 17:37:55.575943 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 17:37:55.576203 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 17:37:55.576316 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 17:37:55.581023 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 17:37:55.582412 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 17:37:55.583716 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 17:37:55.583760 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 17:37:55.587128 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 17:37:55.588241 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 17:37:55.588291 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:37:55.588672 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:37:55.588715 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:37:55.594771 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 17:37:55.594819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 17:37:55.595864 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 17:37:55.595917 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:37:55.600587 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
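umount is Ignition's final stage; it runs while the initrd tears itself down, which is why it interleaves with the network and udev shutdown messages here. Since every stage logs under the same syslog identifier, the whole Ignition run can be pulled back out of the journal after boot; a small sketch:

    # Extract only Ignition's messages from the current boot.
    journalctl -b -t ignition --no-pager | grep -E 'Stage:|finished'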
May 27 17:37:55.605240 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 17:37:55.605313 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 17:37:55.622947 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 17:37:55.623197 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:37:55.645331 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 17:37:55.645449 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 17:37:55.647119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 17:37:55.647200 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 17:37:55.649604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 17:37:55.649645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:37:55.650637 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 17:37:55.650687 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 17:37:55.655631 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 17:37:55.655684 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 17:37:55.660413 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 17:37:55.660465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:37:55.687812 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 17:37:55.688930 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 17:37:55.688993 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:37:55.692592 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 17:37:55.692654 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:37:55.697597 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 17:37:55.697656 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:37:55.699024 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 17:37:55.699077 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:37:55.700113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:37:55.700167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:37:55.706140 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 17:37:55.706196 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 27 17:37:55.706240 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 17:37:55.706286 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:37:55.722149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 17:37:55.722260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 17:37:55.876280 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 27 17:37:55.876515 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 17:37:55.906915 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 17:37:55.907330 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 17:37:55.907401 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 17:37:55.911486 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 17:37:55.936516 systemd[1]: Switching root. May 27 17:37:55.968326 systemd-journald[220]: Journal stopped May 27 17:37:57.340289 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 27 17:37:57.340353 kernel: SELinux: policy capability network_peer_controls=1 May 27 17:37:57.340367 kernel: SELinux: policy capability open_perms=1 May 27 17:37:57.340379 kernel: SELinux: policy capability extended_socket_class=1 May 27 17:37:57.340390 kernel: SELinux: policy capability always_check_network=0 May 27 17:37:57.340401 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 17:37:57.340413 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 17:37:57.340430 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 17:37:57.340453 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 17:37:57.340470 kernel: SELinux: policy capability userspace_initial_context=0 May 27 17:37:57.340482 kernel: audit: type=1403 audit(1748367476.517:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 17:37:57.340499 systemd[1]: Successfully loaded SELinux policy in 59.061ms. May 27 17:37:57.340521 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.396ms. May 27 17:37:57.340534 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:37:57.340547 systemd[1]: Detected virtualization kvm. May 27 17:37:57.340559 systemd[1]: Detected architecture x86-64. May 27 17:37:57.340571 systemd[1]: Detected first boot. May 27 17:37:57.340585 systemd[1]: Initializing machine ID from VM UUID. May 27 17:37:57.340596 zram_generator::config[1128]: No configuration found. May 27 17:37:57.340621 kernel: Guest personality initialized and is inactive May 27 17:37:57.340632 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 17:37:57.340643 kernel: Initialized host personality May 27 17:37:57.340654 kernel: NET: Registered PF_VSOCK protocol family May 27 17:37:57.340666 systemd[1]: Populated /etc with preset unit settings. May 27 17:37:57.340684 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 17:37:57.340696 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 17:37:57.340710 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 17:37:57.340722 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 17:37:57.340734 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 17:37:57.340746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 17:37:57.340758 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 17:37:57.340770 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
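"Initializing machine ID from VM UUID" means systemd seeded the machine ID from the hypervisor-provided DMI product UUID rather than generating a random one, so the ID stays stable if the same VM is reprovisioned. A quick check on a KVM guest, assuming the usual sysfs path; the machine ID should be the same UUID with the dashes stripped:

    # Compare the VM UUID with the committed machine ID.
    cat /sys/class/dmi/id/product_uuid
    cat /etc/machine-id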
May 27 17:37:57.340782 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 17:37:57.340794 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 17:37:57.340809 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 17:37:57.340825 systemd[1]: Created slice user.slice - User and Session Slice. May 27 17:37:57.340841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:37:57.340853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:37:57.340865 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 17:37:57.340877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 17:37:57.340890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 17:37:57.340905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:37:57.340917 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 17:37:57.340930 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:37:57.340942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:37:57.340969 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 17:37:57.340981 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 17:37:57.340994 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 17:37:57.341006 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 17:37:57.341018 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:37:57.341030 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:37:57.341044 systemd[1]: Reached target slices.target - Slice Units. May 27 17:37:57.341056 systemd[1]: Reached target swap.target - Swaps. May 27 17:37:57.341068 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 17:37:57.341079 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 17:37:57.341091 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 17:37:57.341103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:37:57.341116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:37:57.341128 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:37:57.341142 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 17:37:57.341157 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 17:37:57.341168 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 17:37:57.341180 systemd[1]: Mounting media.mount - External Media Directory... May 27 17:37:57.341192 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:37:57.341204 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 17:37:57.341216 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
May 27 17:37:57.341227 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 17:37:57.341240 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 17:37:57.341254 systemd[1]: Reached target machines.target - Containers. May 27 17:37:57.341266 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 17:37:57.341277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:37:57.341289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:37:57.341301 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 17:37:57.341313 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:37:57.341325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:37:57.341337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:37:57.341349 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 17:37:57.341363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:37:57.341375 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 17:37:57.341387 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 17:37:57.341398 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 17:37:57.341410 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 17:37:57.341422 systemd[1]: Stopped systemd-fsck-usr.service. May 27 17:37:57.341435 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:37:57.341455 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:37:57.341470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:37:57.341482 kernel: fuse: init (API version 7.41) May 27 17:37:57.341493 kernel: loop: module loaded May 27 17:37:57.341505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:37:57.341517 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 17:37:57.341529 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 17:37:57.341543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:37:57.341555 systemd[1]: verity-setup.service: Deactivated successfully. May 27 17:37:57.341567 systemd[1]: Stopped verity-setup.service. May 27 17:37:57.341580 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:37:57.341594 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 17:37:57.341607 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 17:37:57.341619 systemd[1]: Mounted media.mount - External Media Directory. 
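The modprobe@<module>.service entries above are instances of a single template unit that effectively runs modprobe on its instance name, so early-boot module loading can be ordered with ordinary unit dependencies. Roughly the same thing by hand, using module names taken from this log:

    # Each "Load Kernel Module X" unit amounts to a modprobe of X.
    systemctl start modprobe@fuse.service     # ~ modprobe fuse
    lsmod | grep -E '^(fuse|loop|dm_mod)'     # confirm what the log reports loaded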
May 27 17:37:57.341631 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 17:37:57.341644 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 17:37:57.341655 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 17:37:57.341667 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:37:57.341679 kernel: ACPI: bus type drm_connector registered May 27 17:37:57.341690 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 17:37:57.341725 systemd-journald[1203]: Collecting audit messages is disabled. May 27 17:37:57.341753 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 17:37:57.341776 systemd-journald[1203]: Journal started May 27 17:37:57.341801 systemd-journald[1203]: Runtime Journal (/run/log/journal/58f8c02f1e754aa58088871ed2dc6cc9) is 6M, max 48.6M, 42.5M free. May 27 17:37:57.083158 systemd[1]: Queued start job for default target multi-user.target. May 27 17:37:57.104916 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 17:37:57.105372 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 17:37:57.343359 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 17:37:57.346974 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:37:57.348128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:37:57.348353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:37:57.350073 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:37:57.350297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:37:57.351990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:37:57.352207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:37:57.354057 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 17:37:57.354271 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 17:37:57.355859 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:37:57.356090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:37:57.357723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:37:57.359391 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:37:57.361236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 17:37:57.363113 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 17:37:57.377699 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:37:57.380691 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 17:37:57.383127 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 17:37:57.384640 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 17:37:57.384681 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:37:57.386937 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 17:37:57.393966 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
May 27 17:37:57.396918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:37:57.399555 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 17:37:57.402628 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 17:37:57.405090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 17:37:57.406688 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 17:37:57.408136 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:37:57.410179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:37:57.415072 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 17:37:57.423526 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:37:57.428112 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 17:37:57.429993 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 17:37:57.458498 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:37:57.463660 systemd-journald[1203]: Time spent on flushing to /var/log/journal/58f8c02f1e754aa58088871ed2dc6cc9 is 17.985ms for 984 entries. May 27 17:37:57.463660 systemd-journald[1203]: System Journal (/var/log/journal/58f8c02f1e754aa58088871ed2dc6cc9) is 8M, max 195.6M, 187.6M free. May 27 17:37:57.497318 systemd-journald[1203]: Received client request to flush runtime journal. May 27 17:37:57.497362 kernel: loop0: detected capacity change from 0 to 146240 May 27 17:37:57.471670 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 17:37:57.474389 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 17:37:57.480974 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 17:37:57.482529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:37:57.483754 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. May 27 17:37:57.483767 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. May 27 17:37:57.492299 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:37:57.495550 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 17:37:57.499650 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 17:37:57.509993 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 17:37:57.513268 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 17:37:57.531989 kernel: loop1: detected capacity change from 0 to 113872 May 27 17:37:57.551104 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 17:37:57.554167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:37:57.562978 kernel: loop2: detected capacity change from 0 to 224512 May 27 17:37:57.588627 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. 
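At this point journald has flushed the volatile journal in /run/log/journal into the persistent one under /var/log/journal, with the sizes and flush time quoted above. Both can be examined later from the running system:

    # Journal usage and file metadata for the sizes reported above.
    journalctl --disk-usage
    journalctl --header | head -n 20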
May 27 17:37:57.588650 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. May 27 17:37:57.594026 kernel: loop3: detected capacity change from 0 to 146240 May 27 17:37:57.595036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:37:57.729991 kernel: loop4: detected capacity change from 0 to 113872 May 27 17:37:57.739999 kernel: loop5: detected capacity change from 0 to 224512 May 27 17:37:57.750766 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 17:37:57.751511 (sd-merge)[1272]: Merged extensions into '/usr'. May 27 17:37:57.757894 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... May 27 17:37:57.758081 systemd[1]: Reloading... May 27 17:37:57.821029 zram_generator::config[1295]: No configuration found. May 27 17:37:57.977675 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 17:37:57.979566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:37:58.098186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 17:37:58.098503 systemd[1]: Reloading finished in 339 ms. May 27 17:37:58.127799 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 17:37:58.129505 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 17:37:58.157693 systemd[1]: Starting ensure-sysext.service... May 27 17:37:58.160693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:37:58.183636 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... May 27 17:37:58.183650 systemd[1]: Reloading... May 27 17:37:58.202108 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 17:37:58.202170 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 17:37:58.202465 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 17:37:58.202720 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 17:37:58.203599 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 17:37:58.203890 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. May 27 17:37:58.204081 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. May 27 17:37:58.211024 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. May 27 17:37:58.211100 systemd-tmpfiles[1337]: Skipping /boot May 27 17:37:58.227531 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. May 27 17:37:58.228362 systemd-tmpfiles[1337]: Skipping /boot May 27 17:37:58.246032 zram_generator::config[1367]: No configuration found. May 27 17:37:58.390007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:37:58.473651 systemd[1]: Reloading finished in 289 ms. 
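The (sd-merge) lines show systemd-sysext overlaying extension images onto /usr: this is how the host picks up containerd, Docker, and the kubernetes-v1.32.4 image that Ignition linked into /etc/extensions earlier. The same mechanism is scriptable on a running machine:

    # Inspect and refresh merged system extensions (names from this log).
    systemd-sysext list           # containerd-flatcar, docker-flatcar, kubernetes
    systemd-sysext status         # which hierarchies currently carry overlays
    sudo systemd-sysext refresh   # unmerge and re-merge after changing images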
May 27 17:37:58.502220 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 17:37:58.526122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:37:58.535574 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:37:58.538235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 17:37:58.540745 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 17:37:58.550086 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:37:58.553847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:37:58.559208 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 17:37:58.564359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:37:58.564593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:37:58.566907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:37:58.571044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:37:58.574293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:37:58.575462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:37:58.575598 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:37:58.579211 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 17:37:58.580931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:37:58.589557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:37:58.589823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:37:58.591991 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 17:37:58.593771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 17:37:58.595922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:37:58.596174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:37:58.597985 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:37:58.598206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:37:58.607873 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:37:58.608121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:37:58.610157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:37:58.614581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:37:58.620231 augenrules[1439]: No rules May 27 17:37:58.621501 systemd-udevd[1408]: Using default interface naming scheme 'v255'. 
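The lone "augenrules[1439]: No rules" line just means /etc/audit/rules.d contained no rule fragments for audit-rules.service to compile; it is not an error. For reference:

    # augenrules concatenates rule fragments and loads the result.
    ls /etc/audit/rules.d/
    sudo augenrules --load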
May 27 17:37:58.623031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:37:58.628611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:37:58.629925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:37:58.630158 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:37:58.632018 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 17:37:58.633145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:37:58.634755 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:37:58.635188 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:37:58.636824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:37:58.637136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:37:58.638698 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 17:37:58.640823 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 17:37:58.644709 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:37:58.645067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:37:58.646695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:37:58.647261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:37:58.649279 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:37:58.649495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:37:58.656844 systemd[1]: Finished ensure-sysext.service. May 27 17:37:58.658198 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 17:37:58.659906 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:37:58.677583 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:37:58.678759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 17:37:58.678840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:37:58.685131 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 17:37:58.687024 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 17:37:58.789913 systemd-resolved[1406]: Positive Trust Anchors: May 27 17:37:58.790312 systemd-resolved[1406]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:37:58.790349 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:37:58.795522 systemd-resolved[1406]: Defaulting to hostname 'linux'. May 27 17:37:58.797740 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:37:58.799874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:37:58.839667 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 17:37:58.845538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 17:37:58.849235 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 17:37:58.875278 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 17:37:58.881964 kernel: mousedev: PS/2 mouse device common for all mice May 27 17:37:58.882002 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 17:37:58.887985 kernel: ACPI: button: Power Button [PWRF] May 27 17:37:58.918068 systemd-networkd[1486]: lo: Link UP May 27 17:37:58.918085 systemd-networkd[1486]: lo: Gained carrier May 27 17:37:58.919905 systemd-networkd[1486]: Enumeration completed May 27 17:37:58.920090 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:37:58.920612 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:37:58.920626 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:37:58.921806 systemd-networkd[1486]: eth0: Link UP May 27 17:37:58.922038 systemd[1]: Reached target network.target - Network. May 27 17:37:58.922102 systemd-networkd[1486]: eth0: Gained carrier May 27 17:37:58.922115 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:37:58.926104 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 17:37:58.934100 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 17:37:58.941003 systemd-networkd[1486]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 17:37:58.942838 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 17:37:58.942861 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection. May 27 17:37:58.944454 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:37:58.945727 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
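Flatcar's fallback zz-default.network matches essentially any wired interface and enables DHCP, which is where the 10.0.0.35/16 lease above comes from; the repeated "potentially unpredictable interface name" warning is networkd suggesting a more specific match. A pinned replacement sketch; the MAC address below is a placeholder, not this VM's real one:

    # /etc/systemd/network/00-eth0.network -- hypothetical pinned config.
    cat <<'EOF' | sudo tee /etc/systemd/network/00-eth0.network
    [Match]
    MACAddress=52:54:00:12:34:56

    [Network]
    DHCP=yes
    EOF
    sudo networkctl reload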
May 27 17:37:58.947119 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 17:37:58.948554 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 17:37:58.950427 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 17:37:58.951788 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 17:37:58.951818 systemd[1]: Reached target paths.target - Path Units. May 27 17:37:58.952769 systemd[1]: Reached target time-set.target - System Time Set. May 27 17:37:58.954102 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 17:37:58.955360 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 17:37:58.956684 systemd[1]: Reached target timers.target - Timer Units. May 27 17:37:58.958725 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 17:37:58.961648 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 17:37:58.971135 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 17:37:58.972785 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 17:37:58.976052 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 17:37:59.852538 systemd-resolved[1406]: Clock change detected. Flushing caches. May 27 17:37:59.852830 systemd-timesyncd[1488]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 17:37:59.852885 systemd-timesyncd[1488]: Initial clock synchronization to Tue 2025-05-27 17:37:59.852474 UTC. May 27 17:37:59.857702 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 17:37:59.859418 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 17:37:59.861909 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 17:37:59.864295 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 17:37:59.884584 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 17:37:59.884981 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 17:37:59.892275 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:37:59.893717 systemd[1]: Reached target basic.target - Basic System. May 27 17:37:59.895083 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 17:37:59.895216 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 17:37:59.896985 systemd[1]: Starting containerd.service - containerd container runtime... May 27 17:37:59.899828 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 17:37:59.903952 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 17:37:59.906590 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 17:37:59.916501 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 17:37:59.918234 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 27 17:37:59.921101 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 17:37:59.926014 jq[1529]: false May 27 17:37:59.925961 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 17:37:59.928877 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 17:37:59.935999 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 17:37:59.943885 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 17:37:59.949989 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 17:37:59.952464 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 17:37:59.953229 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 17:37:59.956005 systemd[1]: Starting update-engine.service - Update Engine... May 27 17:37:59.967021 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 17:37:59.967812 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing passwd entry cache May 27 17:37:59.966444 oslogin_cache_refresh[1531]: Refreshing passwd entry cache May 27 17:37:59.978293 oslogin_cache_refresh[1531]: Failure getting users, quitting May 27 17:37:59.984785 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting users, quitting May 27 17:37:59.984785 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 17:37:59.984785 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing group entry cache May 27 17:37:59.976576 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 17:37:59.978323 oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 17:37:59.980144 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 17:37:59.978388 oslogin_cache_refresh[1531]: Refreshing group entry cache May 27 17:37:59.980452 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 17:37:59.981057 systemd[1]: motdgen.service: Deactivated successfully. May 27 17:37:59.981420 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 17:37:59.990618 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting groups, quitting May 27 17:37:59.990618 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 17:37:59.987125 oslogin_cache_refresh[1531]: Failure getting groups, quitting May 27 17:37:59.987148 oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 17:37:59.995638 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
May 27 17:37:59.998168 update_engine[1543]: I20250527 17:37:59.998082 1543 main.cc:92] Flatcar Update Engine starting May 27 17:37:59.998964 extend-filesystems[1530]: Found loop3 May 27 17:38:00.000176 extend-filesystems[1530]: Found loop4 May 27 17:38:00.000176 extend-filesystems[1530]: Found loop5 May 27 17:38:00.000176 extend-filesystems[1530]: Found sr0 May 27 17:38:00.000176 extend-filesystems[1530]: Found vda May 27 17:38:00.000176 extend-filesystems[1530]: Found vda1 May 27 17:38:00.000176 extend-filesystems[1530]: Found vda2 May 27 17:38:00.000176 extend-filesystems[1530]: Found vda3 May 27 17:38:00.000176 extend-filesystems[1530]: Found usr May 27 17:38:00.000176 extend-filesystems[1530]: Found vda4 May 27 17:38:00.000176 extend-filesystems[1530]: Found vda6 May 27 17:38:00.000176 extend-filesystems[1530]: Found vda7 May 27 17:38:00.000176 extend-filesystems[1530]: Found vda9 May 27 17:38:00.000176 extend-filesystems[1530]: Checking size of /dev/vda9 May 27 17:38:00.015904 extend-filesystems[1530]: Resized partition /dev/vda9 May 27 17:38:00.017174 extend-filesystems[1554]: resize2fs 1.47.2 (1-Jan-2025) May 27 17:38:00.021630 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 17:38:00.030406 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 17:38:00.034113 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 17:38:00.035727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 17:38:00.041801 jq[1544]: true May 27 17:38:00.070133 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 17:38:00.070274 jq[1556]: true May 27 17:38:00.075243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:38:00.092095 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 17:38:00.098762 tar[1546]: linux-amd64/LICENSE May 27 17:38:00.100504 tar[1546]: linux-amd64/helm May 27 17:38:00.101150 extend-filesystems[1554]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 17:38:00.101150 extend-filesystems[1554]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 17:38:00.101150 extend-filesystems[1554]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 17:38:00.107120 extend-filesystems[1530]: Resized filesystem in /dev/vda9 May 27 17:38:00.103452 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 17:38:00.104385 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 17:38:00.124582 dbus-daemon[1527]: [system] SELinux support is enabled May 27 17:38:00.124794 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 17:38:00.126233 kernel: kvm_amd: TSC scaling supported May 27 17:38:00.126278 kernel: kvm_amd: Nested Virtualization enabled May 27 17:38:00.126291 kernel: kvm_amd: Nested Paging enabled May 27 17:38:00.127720 kernel: kvm_amd: LBR virtualization supported May 27 17:38:00.129292 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 27 17:38:00.129321 kernel: kvm_amd: Virtual GIF supported May 27 17:38:00.135233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 17:38:00.135293 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
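
The extend-filesystems run above grew the root partition online: resize2fs took /dev/vda9 from 553472 to 1864699 blocks at the 4 KiB block size shown in the EXT4-fs kernel line. Converting those counts to sizes, with values copied from the log:

    BLOCK_SIZE = 4096  # ext4 block size, per the EXT4-fs kernel message

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(553_472):.2f} GiB")    # ~2.11 GiB
    print(f"after:  {gib(1_864_699):.2f} GiB")  # ~7.11 GiB
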
May 27 17:38:00.137031 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 17:38:00.137109 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 17:38:00.157297 systemd[1]: Started update-engine.service - Update Engine. May 27 17:38:00.157506 update_engine[1543]: I20250527 17:38:00.157392 1543 update_check_scheduler.cc:74] Next update check in 11m34s May 27 17:38:00.162908 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 17:38:00.223711 bash[1587]: Updated "/home/core/.ssh/authorized_keys" May 27 17:38:00.226634 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 17:38:00.228317 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 17:38:00.244273 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) May 27 17:38:00.244715 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 17:38:00.245104 systemd-logind[1539]: New seat seat0. May 27 17:38:00.248835 systemd[1]: Started systemd-logind.service - User Login Management. May 27 17:38:00.298617 kernel: EDAC MC: Ver: 3.0.0 May 27 17:38:00.323880 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 17:38:00.364759 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 17:38:00.371230 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 17:38:00.378209 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 17:38:00.423706 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:38:00.423971 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 17:38:00.426386 containerd[1555]: time="2025-05-27T17:38:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 17:38:00.430881 containerd[1555]: time="2025-05-27T17:38:00.428530262Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 17:38:00.428891 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
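
update_engine scheduled its next check for 11m34s out. A small parser for that h/m/s duration format, for illustration only (the regex and function are mine, not part of update_engine):

    import re

    def parse_duration(s: str) -> int:
        """Turn durations like '11m34s' into seconds."""
        m = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", s)
        if not m or not any(m.groups()):
            raise ValueError(f"unrecognized duration: {s!r}")
        h, mins, secs = (int(g or 0) for g in m.groups())
        return h * 3600 + mins * 60 + secs

    assert parse_duration("11m34s") == 694
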
May 27 17:38:00.437613 containerd[1555]: time="2025-05-27T17:38:00.437563072Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.638µs" May 27 17:38:00.437699 containerd[1555]: time="2025-05-27T17:38:00.437684049Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 17:38:00.437753 containerd[1555]: time="2025-05-27T17:38:00.437741897Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 17:38:00.438002 containerd[1555]: time="2025-05-27T17:38:00.437982438Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 17:38:00.438088 containerd[1555]: time="2025-05-27T17:38:00.438073068Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 17:38:00.438169 containerd[1555]: time="2025-05-27T17:38:00.438153920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:38:00.438285 containerd[1555]: time="2025-05-27T17:38:00.438269416Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:38:00.438335 containerd[1555]: time="2025-05-27T17:38:00.438323518Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:38:00.438689 containerd[1555]: time="2025-05-27T17:38:00.438669948Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:38:00.438764 containerd[1555]: time="2025-05-27T17:38:00.438750980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:38:00.438814 containerd[1555]: time="2025-05-27T17:38:00.438801194Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:38:00.438857 containerd[1555]: time="2025-05-27T17:38:00.438845998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 17:38:00.439004 containerd[1555]: time="2025-05-27T17:38:00.438988916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 17:38:00.439297 containerd[1555]: time="2025-05-27T17:38:00.439279110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:38:00.439382 containerd[1555]: time="2025-05-27T17:38:00.439362867Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:38:00.439436 containerd[1555]: time="2025-05-27T17:38:00.439422900Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 17:38:00.439532 containerd[1555]: time="2025-05-27T17:38:00.439516836Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 17:38:00.439974 containerd[1555]: 
time="2025-05-27T17:38:00.439956350Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 17:38:00.440115 containerd[1555]: time="2025-05-27T17:38:00.440100470Z" level=info msg="metadata content store policy set" policy=shared May 27 17:38:00.446255 containerd[1555]: time="2025-05-27T17:38:00.446188989Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 17:38:00.446360 containerd[1555]: time="2025-05-27T17:38:00.446301630Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 17:38:00.446360 containerd[1555]: time="2025-05-27T17:38:00.446330073Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 17:38:00.446360 containerd[1555]: time="2025-05-27T17:38:00.446345442Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 17:38:00.446436 containerd[1555]: time="2025-05-27T17:38:00.446361712Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 17:38:00.446436 containerd[1555]: time="2025-05-27T17:38:00.446375248Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 17:38:00.446436 containerd[1555]: time="2025-05-27T17:38:00.446394544Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 17:38:00.446436 containerd[1555]: time="2025-05-27T17:38:00.446412478Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 17:38:00.446436 containerd[1555]: time="2025-05-27T17:38:00.446426965Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 17:38:00.446436 containerd[1555]: time="2025-05-27T17:38:00.446439518Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:38:00.446649 containerd[1555]: time="2025-05-27T17:38:00.446451631Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:38:00.446649 containerd[1555]: time="2025-05-27T17:38:00.446469515Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 17:38:00.446716 containerd[1555]: time="2025-05-27T17:38:00.446660573Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:38:00.446716 containerd[1555]: time="2025-05-27T17:38:00.446686311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:38:00.446716 containerd[1555]: time="2025-05-27T17:38:00.446705457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:38:00.446786 containerd[1555]: time="2025-05-27T17:38:00.446718081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:38:00.446786 containerd[1555]: time="2025-05-27T17:38:00.446733389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:38:00.446786 containerd[1555]: time="2025-05-27T17:38:00.446751924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:38:00.446786 containerd[1555]: 
time="2025-05-27T17:38:00.446765670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:38:00.446786 containerd[1555]: time="2025-05-27T17:38:00.446782542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 17:38:00.446915 containerd[1555]: time="2025-05-27T17:38:00.446796608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:38:00.446915 containerd[1555]: time="2025-05-27T17:38:00.446809923Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:38:00.446915 containerd[1555]: time="2025-05-27T17:38:00.446823639Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:38:00.446915 containerd[1555]: time="2025-05-27T17:38:00.446910301Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:38:00.447030 containerd[1555]: time="2025-05-27T17:38:00.446930689Z" level=info msg="Start snapshots syncer" May 27 17:38:00.447030 containerd[1555]: time="2025-05-27T17:38:00.446966066Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:38:00.447370 containerd[1555]: time="2025-05-27T17:38:00.447305122Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:38:00.447509 containerd[1555]: time="2025-05-27T17:38:00.447376245Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:38:00.447964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 17:38:00.448803 containerd[1555]: time="2025-05-27T17:38:00.448690961Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:38:00.449020 containerd[1555]: time="2025-05-27T17:38:00.448844428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:38:00.449020 containerd[1555]: time="2025-05-27T17:38:00.448905894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:38:00.449020 containerd[1555]: time="2025-05-27T17:38:00.448921663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:38:00.449020 containerd[1555]: time="2025-05-27T17:38:00.448936251Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:38:00.449020 containerd[1555]: time="2025-05-27T17:38:00.448949576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:38:00.449020 containerd[1555]: time="2025-05-27T17:38:00.448963301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:38:00.449020 containerd[1555]: time="2025-05-27T17:38:00.448977047Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:38:00.449175 containerd[1555]: time="2025-05-27T17:38:00.449023484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:38:00.449175 containerd[1555]: time="2025-05-27T17:38:00.449049794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:38:00.449175 containerd[1555]: time="2025-05-27T17:38:00.449063369Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449804599Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449840136Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449852258Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449863810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449874180Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449886563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449904226Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449925396Z" level=info msg="runtime interface created" May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449932609Z" level=info 
msg="created NRI interface" May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449942878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449958027Z" level=info msg="Connect containerd service" May 27 17:38:00.450253 containerd[1555]: time="2025-05-27T17:38:00.449987512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:38:00.451305 containerd[1555]: time="2025-05-27T17:38:00.450947593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:38:00.458997 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:38:00.462712 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:38:00.465477 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 17:38:00.467327 systemd[1]: Reached target getty.target - Login Prompts. May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.551997888Z" level=info msg="Start subscribing containerd event" May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552105029Z" level=info msg="Start recovering state" May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552263376Z" level=info msg="Start event monitor" May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552499519Z" level=info msg="Start cni network conf syncer for default" May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552536668Z" level=info msg="Start streaming server" May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552553440Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552565122Z" level=info msg="runtime interface starting up..." May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552578447Z" level=info msg="starting plugins..." May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552616047Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552254920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:38:00.553216 containerd[1555]: time="2025-05-27T17:38:00.552853723Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:38:00.553074 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:38:00.554016 containerd[1555]: time="2025-05-27T17:38:00.553971279Z" level=info msg="containerd successfully booted in 0.128221s" May 27 17:38:00.700543 tar[1546]: linux-amd64/README.md May 27 17:38:00.729098 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:38:01.116880 systemd-networkd[1486]: eth0: Gained IPv6LL May 27 17:38:01.120663 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:38:01.122978 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:38:01.126482 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 17:38:01.129632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:38:01.157186 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 27 17:38:01.188012 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:38:01.190102 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 17:38:01.190536 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 17:38:01.194234 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 17:38:01.919713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:38:01.921484 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:38:01.922852 systemd[1]: Startup finished in 3.303s (kernel) + 6.858s (initrd) + 4.592s (userspace) = 14.754s. May 27 17:38:01.960029 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:38:02.372820 kubelet[1664]: E0527 17:38:02.372685 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:38:02.376580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:38:02.376799 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:38:02.377195 systemd[1]: kubelet.service: Consumed 1.018s CPU time, 264.8M memory peak. May 27 17:38:03.824254 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:38:03.825740 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:32872.service - OpenSSH per-connection server daemon (10.0.0.1:32872). May 27 17:38:03.897377 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 32872 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:38:03.899795 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:03.913276 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:38:03.914699 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:38:03.922159 systemd-logind[1539]: New session 1 of user core. May 27 17:38:04.089322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:38:04.092587 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:38:04.107822 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:38:04.110116 systemd-logind[1539]: New session c1 of user core. May 27 17:38:04.263843 systemd[1682]: Queued start job for default target default.target. May 27 17:38:04.280816 systemd[1682]: Created slice app.slice - User Application Slice. May 27 17:38:04.280839 systemd[1682]: Reached target paths.target - Paths. May 27 17:38:04.280875 systemd[1682]: Reached target timers.target - Timers. May 27 17:38:04.282474 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:38:04.293308 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:38:04.293427 systemd[1682]: Reached target sockets.target - Sockets. May 27 17:38:04.293464 systemd[1682]: Reached target basic.target - Basic System. May 27 17:38:04.293500 systemd[1682]: Reached target default.target - Main User Target. 
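
The kubelet exit above is the usual pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes it, so the service crash-loops in the meantime (the restarts appear later in this log). Purely as a sketch of what eventually lands there, with values matching settings visible elsewhere in this log (systemd cgroup driver, containerd socket, static pod path), not the file kubeadm actually generates:

    import pathlib, textwrap

    # Illustrative minimal KubeletConfiguration; the real file is generated
    # by kubeadm, not written by hand.
    KUBELET_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        staticPodPath: /etc/kubernetes/manifests
        """)
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(KUBELET_CONFIG)
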
May 27 17:38:04.293529 systemd[1682]: Startup finished in 177ms. May 27 17:38:04.294007 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:38:04.295856 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:38:04.357980 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:32888.service - OpenSSH per-connection server daemon (10.0.0.1:32888). May 27 17:38:04.413455 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 32888 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:38:04.414855 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:04.419204 systemd-logind[1539]: New session 2 of user core. May 27 17:38:04.428732 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:38:04.481633 sshd[1695]: Connection closed by 10.0.0.1 port 32888 May 27 17:38:04.481975 sshd-session[1693]: pam_unix(sshd:session): session closed for user core May 27 17:38:04.494256 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:32888.service: Deactivated successfully. May 27 17:38:04.495859 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:38:04.496661 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. May 27 17:38:04.499533 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:32896.service - OpenSSH per-connection server daemon (10.0.0.1:32896). May 27 17:38:04.500235 systemd-logind[1539]: Removed session 2. May 27 17:38:04.568856 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 32896 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:38:04.570580 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:04.575526 systemd-logind[1539]: New session 3 of user core. May 27 17:38:04.593001 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:38:04.643240 sshd[1704]: Connection closed by 10.0.0.1 port 32896 May 27 17:38:04.643488 sshd-session[1701]: pam_unix(sshd:session): session closed for user core May 27 17:38:04.659521 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:32896.service: Deactivated successfully. May 27 17:38:04.661615 systemd[1]: session-3.scope: Deactivated successfully. May 27 17:38:04.662420 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. May 27 17:38:04.665501 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:32902.service - OpenSSH per-connection server daemon (10.0.0.1:32902). May 27 17:38:04.666128 systemd-logind[1539]: Removed session 3. May 27 17:38:04.726070 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 32902 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:38:04.727443 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:04.732666 systemd-logind[1539]: New session 4 of user core. May 27 17:38:04.746912 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:38:04.801546 sshd[1712]: Connection closed by 10.0.0.1 port 32902 May 27 17:38:04.802017 sshd-session[1710]: pam_unix(sshd:session): session closed for user core May 27 17:38:04.811355 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:32902.service: Deactivated successfully. May 27 17:38:04.813112 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:38:04.813896 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. 
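
This stretch of the log is a burst of short SSH sessions from 10.0.0.1. Pairing systemd-logind's open and close messages is a quick way to read such bursts; a toy sketch over two entries copied from above:

    import re

    LOG = """\
    17:38:04.419204 systemd-logind[1539]: New session 2 of user core.
    17:38:04.496661 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit.
    17:38:04.575526 systemd-logind[1539]: New session 3 of user core.
    17:38:04.662420 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit.
    """

    opened = {}
    for line in LOG.splitlines():
        ts = line.split()[0]
        if m := re.search(r"New session (\S+) of", line):
            opened[m.group(1)] = ts
        elif m := re.search(r"Session (\S+) logged out", line):
            print(f"session {m.group(1)}: {opened.pop(m.group(1))} -> {ts}")
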
May 27 17:38:04.816655 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:32912.service - OpenSSH per-connection server daemon (10.0.0.1:32912). May 27 17:38:04.817233 systemd-logind[1539]: Removed session 4. May 27 17:38:04.879551 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 32912 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:38:04.881562 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:04.887979 systemd-logind[1539]: New session 5 of user core. May 27 17:38:04.901794 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 17:38:04.965647 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:38:04.966101 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:38:04.990233 sudo[1721]: pam_unix(sudo:session): session closed for user root May 27 17:38:04.992371 sshd[1720]: Connection closed by 10.0.0.1 port 32912 May 27 17:38:04.992862 sshd-session[1718]: pam_unix(sshd:session): session closed for user core May 27 17:38:05.009472 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:32912.service: Deactivated successfully. May 27 17:38:05.011503 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:38:05.013047 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. May 27 17:38:05.017443 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:32922.service - OpenSSH per-connection server daemon (10.0.0.1:32922). May 27 17:38:05.018336 systemd-logind[1539]: Removed session 5. May 27 17:38:05.082357 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 32922 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:38:05.084356 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:05.089672 systemd-logind[1539]: New session 6 of user core. May 27 17:38:05.104715 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 17:38:05.160645 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:38:05.160971 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:38:05.169819 sudo[1731]: pam_unix(sudo:session): session closed for user root May 27 17:38:05.176867 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:38:05.177189 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:38:05.187861 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:38:05.239909 augenrules[1753]: No rules May 27 17:38:05.241758 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:38:05.242048 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:38:05.243484 sudo[1730]: pam_unix(sudo:session): session closed for user root May 27 17:38:05.245155 sshd[1729]: Connection closed by 10.0.0.1 port 32922 May 27 17:38:05.245534 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 27 17:38:05.258666 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:32922.service: Deactivated successfully. May 27 17:38:05.260722 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:38:05.261516 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. 
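
Above, the two shipped rule files were removed over sudo and audit-rules was restarted, so augenrules now reports "No rules". augenrules assembles everything under /etc/audit/rules.d/*.rules; dropping any file back in repopulates the set on the next restart. An invented example rule in auditctl syntax:

    import pathlib

    # Illustrative only: watch /etc/passwd for writes and attribute changes,
    # tagged "identity". augenrules merges /etc/audit/rules.d/*.rules into
    # /etc/audit/audit.rules when audit-rules.service next runs.
    rule = "-w /etc/passwd -p wa -k identity\n"
    pathlib.Path("/etc/audit/rules.d/10-example.rules").write_text(rule)
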
May 27 17:38:05.264835 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:32928.service - OpenSSH per-connection server daemon (10.0.0.1:32928). May 27 17:38:05.265477 systemd-logind[1539]: Removed session 6. May 27 17:38:05.315162 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 32928 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:38:05.316698 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:38:05.321859 systemd-logind[1539]: New session 7 of user core. May 27 17:38:05.333759 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:38:05.388582 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:38:05.388948 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:38:05.982284 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:38:05.999962 (dockerd)[1785]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:38:06.473577 dockerd[1785]: time="2025-05-27T17:38:06.473381396Z" level=info msg="Starting up" May 27 17:38:06.475921 dockerd[1785]: time="2025-05-27T17:38:06.475869852Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:38:07.487462 dockerd[1785]: time="2025-05-27T17:38:07.487382420Z" level=info msg="Loading containers: start." May 27 17:38:07.498616 kernel: Initializing XFRM netlink socket May 27 17:38:07.764983 systemd-networkd[1486]: docker0: Link UP May 27 17:38:07.772025 dockerd[1785]: time="2025-05-27T17:38:07.771964723Z" level=info msg="Loading containers: done." May 27 17:38:07.790079 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck294337599-merged.mount: Deactivated successfully. May 27 17:38:07.791648 dockerd[1785]: time="2025-05-27T17:38:07.791558508Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:38:07.791735 dockerd[1785]: time="2025-05-27T17:38:07.791675638Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:38:07.791829 dockerd[1785]: time="2025-05-27T17:38:07.791799540Z" level=info msg="Initializing buildkit" May 27 17:38:07.840718 dockerd[1785]: time="2025-05-27T17:38:07.840656334Z" level=info msg="Completed buildkit initialization" May 27 17:38:07.847551 dockerd[1785]: time="2025-05-27T17:38:07.847469501Z" level=info msg="Daemon has completed initialization" May 27 17:38:07.847697 dockerd[1785]: time="2025-05-27T17:38:07.847607741Z" level=info msg="API listen on /run/docker.sock" May 27 17:38:07.847841 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:38:08.779591 containerd[1555]: time="2025-05-27T17:38:08.778924903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 17:38:09.832824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341458781.mount: Deactivated successfully. 
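
dockerd above settled on the overlay2 storage driver (with a warning about CONFIG_OVERLAY_FS_REDIRECT_DIR) and identifies itself as version 28.0.1. Those values can be read back through the CLI, assuming docker is on PATH and the daemon is up:

    import json, subprocess

    info = json.loads(subprocess.check_output(
        ["docker", "info", "--format", "{{json .}}"]))
    print(info["Driver"])          # overlay2, as logged above
    print(info["ServerVersion"])   # 28.0.1
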
May 27 17:38:11.313354 containerd[1555]: time="2025-05-27T17:38:11.313286225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:11.314005 containerd[1555]: time="2025-05-27T17:38:11.313970698Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 27 17:38:11.315163 containerd[1555]: time="2025-05-27T17:38:11.315129622Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:11.317684 containerd[1555]: time="2025-05-27T17:38:11.317654547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:11.318630 containerd[1555]: time="2025-05-27T17:38:11.318582237Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.53959639s" May 27 17:38:11.318667 containerd[1555]: time="2025-05-27T17:38:11.318635457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 27 17:38:11.319263 containerd[1555]: time="2025-05-27T17:38:11.319184957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 27 17:38:12.627402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 17:38:12.629467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:38:12.863351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
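
The first pull above comes with both a byte size and a wall-clock duration, which gives a rough effective throughput. Numbers are copied from the log; "rough" because the interval spans the whole PullImage call, not just the transfer:

    size_bytes = 28_794_611   # kube-apiserver image size, per the log
    seconds = 2.53959639      # "in 2.53959639s"
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # ~11.3 MB/s
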
May 27 17:38:12.881924 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:38:13.129083 containerd[1555]: time="2025-05-27T17:38:13.129011708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:13.129933 containerd[1555]: time="2025-05-27T17:38:13.129898161Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 27 17:38:13.131466 containerd[1555]: time="2025-05-27T17:38:13.131399075Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:13.134065 containerd[1555]: time="2025-05-27T17:38:13.133968815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:13.134987 containerd[1555]: time="2025-05-27T17:38:13.134954343Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.815737707s" May 27 17:38:13.135047 containerd[1555]: time="2025-05-27T17:38:13.134989489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 27 17:38:13.135729 containerd[1555]: time="2025-05-27T17:38:13.135695272Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 27 17:38:13.157973 kubelet[2063]: E0527 17:38:13.157902 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:38:13.165474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:38:13.165745 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:38:13.166244 systemd[1]: kubelet.service: Consumed 277ms CPU time, 111.2M memory peak. 
May 27 17:38:14.897342 containerd[1555]: time="2025-05-27T17:38:14.897254915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:14.898848 containerd[1555]: time="2025-05-27T17:38:14.898789994Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 27 17:38:14.900352 containerd[1555]: time="2025-05-27T17:38:14.900283535Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:14.903421 containerd[1555]: time="2025-05-27T17:38:14.903331812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:14.904545 containerd[1555]: time="2025-05-27T17:38:14.904497458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.768774343s" May 27 17:38:14.904545 containerd[1555]: time="2025-05-27T17:38:14.904541801Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 27 17:38:14.905423 containerd[1555]: time="2025-05-27T17:38:14.905372148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 17:38:16.282155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022432542.mount: Deactivated successfully. 
May 27 17:38:17.331070 containerd[1555]: time="2025-05-27T17:38:17.330993744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:17.331834 containerd[1555]: time="2025-05-27T17:38:17.331801589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 27 17:38:17.333453 containerd[1555]: time="2025-05-27T17:38:17.333387243Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:17.335613 containerd[1555]: time="2025-05-27T17:38:17.335554457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:17.336234 containerd[1555]: time="2025-05-27T17:38:17.336179500Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.430751066s" May 27 17:38:17.336234 containerd[1555]: time="2025-05-27T17:38:17.336219905Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 27 17:38:17.336788 containerd[1555]: time="2025-05-27T17:38:17.336764907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 17:38:17.843930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331203675.mount: Deactivated successfully. 
May 27 17:38:19.285045 containerd[1555]: time="2025-05-27T17:38:19.284957264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:19.286129 containerd[1555]: time="2025-05-27T17:38:19.286055082Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 17:38:19.287821 containerd[1555]: time="2025-05-27T17:38:19.287764969Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:19.292453 containerd[1555]: time="2025-05-27T17:38:19.292372390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:19.293682 containerd[1555]: time="2025-05-27T17:38:19.293528418Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.956729686s" May 27 17:38:19.293682 containerd[1555]: time="2025-05-27T17:38:19.293589452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 17:38:19.294296 containerd[1555]: time="2025-05-27T17:38:19.294256142Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:38:19.886770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817380046.mount: Deactivated successfully. 
May 27 17:38:19.896298 containerd[1555]: time="2025-05-27T17:38:19.896218617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:38:19.897496 containerd[1555]: time="2025-05-27T17:38:19.897453182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 17:38:19.898833 containerd[1555]: time="2025-05-27T17:38:19.898791392Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:38:19.901037 containerd[1555]: time="2025-05-27T17:38:19.900972031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:38:19.901756 containerd[1555]: time="2025-05-27T17:38:19.901704555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.414288ms" May 27 17:38:19.901756 containerd[1555]: time="2025-05-27T17:38:19.901744450Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 17:38:19.902334 containerd[1555]: time="2025-05-27T17:38:19.902306083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 17:38:20.426764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971337252.mount: Deactivated successfully. 
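
Counting the pause pull above and the etcd pull just dispatched, the images fetched in this section are the standard kubeadm control-plane set for this version line. Pre-pulling them by hand would look roughly like this, assuming crictl is installed and configured to talk to the containerd socket (versions copied from the log):

    import subprocess

    IMAGES = [
        "registry.k8s.io/kube-apiserver:v1.32.5",
        "registry.k8s.io/kube-controller-manager:v1.32.5",
        "registry.k8s.io/kube-scheduler:v1.32.5",
        "registry.k8s.io/kube-proxy:v1.32.5",
        "registry.k8s.io/coredns/coredns:v1.11.3",
        "registry.k8s.io/pause:3.10",
        "registry.k8s.io/etcd:3.5.16-0",
    ]
    for image in IMAGES:
        subprocess.run(["crictl", "pull", image], check=True)
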
May 27 17:38:23.399563 containerd[1555]: time="2025-05-27T17:38:23.399483258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:23.400136 containerd[1555]: time="2025-05-27T17:38:23.400089515Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 27 17:38:23.401585 containerd[1555]: time="2025-05-27T17:38:23.401535627Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:23.404430 containerd[1555]: time="2025-05-27T17:38:23.404386824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:23.405537 containerd[1555]: time="2025-05-27T17:38:23.405491065Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.503152641s" May 27 17:38:23.405568 containerd[1555]: time="2025-05-27T17:38:23.405538655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 17:38:23.416195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 17:38:23.418375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:38:23.653480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:38:23.670941 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:38:23.973660 kubelet[2211]: E0527 17:38:23.973441 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:38:23.977348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:38:23.977575 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:38:23.977952 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.7M memory peak. May 27 17:38:26.287938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:38:26.288156 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.7M memory peak. May 27 17:38:26.291114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:38:26.320879 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)... May 27 17:38:26.320909 systemd[1]: Reloading... May 27 17:38:26.422679 zram_generator::config[2283]: No configuration found. May 27 17:38:26.822508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:38:26.945516 systemd[1]: Reloading finished in 624 ms. 
May 27 17:38:27.025645 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 17:38:27.025773 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 17:38:27.026089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:38:27.026134 systemd[1]: kubelet.service: Consumed 167ms CPU time, 98.2M memory peak. May 27 17:38:27.027988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:38:27.228955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:38:27.233275 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:38:27.289700 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:38:27.289700 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:38:27.289700 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:38:27.290132 kubelet[2332]: I0527 17:38:27.289747 2332 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:38:27.775102 kubelet[2332]: I0527 17:38:27.775053 2332 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 17:38:27.775102 kubelet[2332]: I0527 17:38:27.775094 2332 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:38:27.775486 kubelet[2332]: I0527 17:38:27.775465 2332 server.go:954] "Client rotation is on, will bootstrap in background" May 27 17:38:27.805339 kubelet[2332]: E0527 17:38:27.805271 2332 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 27 17:38:27.805975 kubelet[2332]: I0527 17:38:27.805941 2332 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:38:27.816163 kubelet[2332]: I0527 17:38:27.816124 2332 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:38:27.823025 kubelet[2332]: I0527 17:38:27.822987 2332 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:38:27.824979 kubelet[2332]: I0527 17:38:27.824922 2332 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:38:27.825219 kubelet[2332]: I0527 17:38:27.824970 2332 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:38:27.825357 kubelet[2332]: I0527 17:38:27.825228 2332 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:38:27.825357 kubelet[2332]: I0527 17:38:27.825241 2332 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:38:27.825455 kubelet[2332]: I0527 17:38:27.825433 2332 state_mem.go:36] "Initialized new in-memory state store" May 27 17:38:27.830088 kubelet[2332]: I0527 17:38:27.830061 2332 kubelet.go:446] "Attempting to sync node with API server" May 27 17:38:27.831487 kubelet[2332]: I0527 17:38:27.831432 2332 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:38:27.831487 kubelet[2332]: I0527 17:38:27.831476 2332 kubelet.go:352] "Adding apiserver pod source" May 27 17:38:27.831487 kubelet[2332]: I0527 17:38:27.831492 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:38:27.833627 kubelet[2332]: W0527 17:38:27.833550 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 27 17:38:27.833706 kubelet[2332]: E0527 17:38:27.833667 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 27 17:38:27.835147 kubelet[2332]: W0527 17:38:27.834070 2332 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 27 17:38:27.835147 kubelet[2332]: E0527 17:38:27.834118 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 27 17:38:27.835533 kubelet[2332]: I0527 17:38:27.835515 2332 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:38:27.836426 kubelet[2332]: I0527 17:38:27.836377 2332 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:38:27.837289 kubelet[2332]: W0527 17:38:27.837264 2332 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 17:38:27.842658 kubelet[2332]: I0527 17:38:27.842635 2332 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:38:27.842769 kubelet[2332]: I0527 17:38:27.842675 2332 server.go:1287] "Started kubelet" May 27 17:38:27.843076 kubelet[2332]: I0527 17:38:27.843043 2332 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:38:27.844308 kubelet[2332]: I0527 17:38:27.844031 2332 server.go:479] "Adding debug handlers to kubelet server" May 27 17:38:27.844410 kubelet[2332]: I0527 17:38:27.844379 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:38:27.844518 kubelet[2332]: I0527 17:38:27.844442 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:38:27.845221 kubelet[2332]: I0527 17:38:27.844978 2332 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:38:27.845640 kubelet[2332]: I0527 17:38:27.845614 2332 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:38:27.847134 kubelet[2332]: E0527 17:38:27.846453 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:38:27.847134 kubelet[2332]: I0527 17:38:27.846494 2332 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:38:27.847134 kubelet[2332]: I0527 17:38:27.846654 2332 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:38:27.847134 kubelet[2332]: I0527 17:38:27.846706 2332 reconciler.go:26] "Reconciler: start to sync state" May 27 17:38:27.847134 kubelet[2332]: W0527 17:38:27.846963 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 27 17:38:27.847134 kubelet[2332]: E0527 17:38:27.847002 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection 
refused" logger="UnhandledError" May 27 17:38:27.847346 kubelet[2332]: E0527 17:38:27.847303 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" May 27 17:38:27.848305 kubelet[2332]: E0527 17:38:27.848212 2332 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:38:27.849408 kubelet[2332]: I0527 17:38:27.849349 2332 factory.go:221] Registration of the containerd container factory successfully May 27 17:38:27.849408 kubelet[2332]: I0527 17:38:27.849365 2332 factory.go:221] Registration of the systemd container factory successfully May 27 17:38:27.849502 kubelet[2332]: I0527 17:38:27.849449 2332 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:38:27.851518 kubelet[2332]: E0527 17:38:27.848945 2332 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184372fc31cbce8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:38:27.842649742 +0000 UTC m=+0.605506743,LastTimestamp:2025-05-27 17:38:27.842649742 +0000 UTC m=+0.605506743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 17:38:27.862803 kubelet[2332]: I0527 17:38:27.862740 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:38:27.864215 kubelet[2332]: I0527 17:38:27.864177 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:38:27.864215 kubelet[2332]: I0527 17:38:27.864200 2332 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:38:27.864215 kubelet[2332]: I0527 17:38:27.864222 2332 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 17:38:27.864378 kubelet[2332]: I0527 17:38:27.864230 2332 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:38:27.864378 kubelet[2332]: E0527 17:38:27.864276 2332 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:38:27.867947 kubelet[2332]: W0527 17:38:27.867909 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 27 17:38:27.868034 kubelet[2332]: E0527 17:38:27.867955 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 27 17:38:27.868066 kubelet[2332]: I0527 17:38:27.868033 2332 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:38:27.868066 kubelet[2332]: I0527 17:38:27.868043 2332 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:38:27.868066 kubelet[2332]: I0527 17:38:27.868064 2332 state_mem.go:36] "Initialized new in-memory state store" May 27 17:38:27.873319 kubelet[2332]: I0527 17:38:27.873302 2332 policy_none.go:49] "None policy: Start" May 27 17:38:27.873319 kubelet[2332]: I0527 17:38:27.873318 2332 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:38:27.873436 kubelet[2332]: I0527 17:38:27.873332 2332 state_mem.go:35] "Initializing new in-memory state store" May 27 17:38:27.880295 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:38:27.895531 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:38:27.899319 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 17:38:27.906622 kubelet[2332]: I0527 17:38:27.906575 2332 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:38:27.906873 kubelet[2332]: I0527 17:38:27.906843 2332 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:38:27.906925 kubelet[2332]: I0527 17:38:27.906859 2332 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:38:27.907124 kubelet[2332]: I0527 17:38:27.907098 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:38:27.908049 kubelet[2332]: E0527 17:38:27.908000 2332 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:38:27.908097 kubelet[2332]: E0527 17:38:27.908053 2332 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 17:38:27.976430 systemd[1]: Created slice kubepods-burstable-pod64dbcb8fc4133d38fbf58fbae66908be.slice - libcontainer container kubepods-burstable-pod64dbcb8fc4133d38fbf58fbae66908be.slice. 
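
The eviction manager starting its control loop here enforces the HardEvictionThresholds listed in the nodeConfig entry earlier in this log: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and inodesFree < 5%. A sketch of how one such "LessThan" percentage signal evaluates; the capacities below are made up for illustration and this is not the kubelet's eviction code:

    package main

    import "fmt"

    // thresholdMet reports whether an observed value has dropped below a
    // fraction of capacity, i.e. the "LessThan"/"Percentage" operator pair
    // in the log's HardEvictionThresholds JSON.
    func thresholdMet(available, capacity uint64, pct float64) bool {
        return float64(available) < pct*float64(capacity)
    }

    func main() {
        var (
            capacity  uint64 = 40 << 30 // 40 GiB nodefs, illustrative
            available uint64 = 3 << 30  // 3 GiB free, illustrative
        )
        if thresholdMet(available, capacity, 0.10) {
            fmt.Println("nodefs.available below 10%: hard eviction would trigger")
        } else {
            fmt.Println("nodefs.available above threshold")
        }
    }
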
May 27 17:38:28.005484 kubelet[2332]: E0527 17:38:28.005438 2332 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:38:28.008989 kubelet[2332]: I0527 17:38:28.008963 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:38:28.009409 kubelet[2332]: E0527 17:38:28.009369 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 27 17:38:28.010029 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 27 17:38:28.012231 kubelet[2332]: E0527 17:38:28.012206 2332 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:38:28.014158 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 27 17:38:28.016120 kubelet[2332]: E0527 17:38:28.016094 2332 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:38:28.048570 kubelet[2332]: I0527 17:38:28.048347 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64dbcb8fc4133d38fbf58fbae66908be-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"64dbcb8fc4133d38fbf58fbae66908be\") " pod="kube-system/kube-apiserver-localhost" May 27 17:38:28.048570 kubelet[2332]: I0527 17:38:28.048400 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64dbcb8fc4133d38fbf58fbae66908be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"64dbcb8fc4133d38fbf58fbae66908be\") " pod="kube-system/kube-apiserver-localhost" May 27 17:38:28.048570 kubelet[2332]: I0527 17:38:28.048431 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:28.048570 kubelet[2332]: I0527 17:38:28.048452 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64dbcb8fc4133d38fbf58fbae66908be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"64dbcb8fc4133d38fbf58fbae66908be\") " pod="kube-system/kube-apiserver-localhost" May 27 17:38:28.048570 kubelet[2332]: I0527 17:38:28.048469 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:28.048851 kubelet[2332]: I0527 17:38:28.048483 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:28.048851 kubelet[2332]: I0527 17:38:28.048497 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:28.048851 kubelet[2332]: I0527 17:38:28.048511 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:28.048851 kubelet[2332]: I0527 17:38:28.048525 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 17:38:28.048851 kubelet[2332]: E0527 17:38:28.048781 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" May 27 17:38:28.210833 kubelet[2332]: I0527 17:38:28.210782 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:38:28.211256 kubelet[2332]: E0527 17:38:28.211191 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 27 17:38:28.306688 kubelet[2332]: E0527 17:38:28.306509 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:28.307448 containerd[1555]: time="2025-05-27T17:38:28.307408841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:64dbcb8fc4133d38fbf58fbae66908be,Namespace:kube-system,Attempt:0,}" May 27 17:38:28.312719 kubelet[2332]: E0527 17:38:28.312671 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:28.313194 containerd[1555]: time="2025-05-27T17:38:28.313146100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 17:38:28.317535 kubelet[2332]: E0527 17:38:28.317504 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:28.318162 containerd[1555]: time="2025-05-27T17:38:28.318111302Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 17:38:28.450162 kubelet[2332]: E0527 17:38:28.450055 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" May 27 17:38:28.498023 containerd[1555]: time="2025-05-27T17:38:28.497956265Z" level=info msg="connecting to shim 817bd7e890c2f9187c1a0fe10ffea2748a88e9f061db7a86919a69f37c0540fc" address="unix:///run/containerd/s/afb852462456455e61a983eb3fd1d000716c6af15dd2b9f3e74cac8386f61c79" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:28.499250 containerd[1555]: time="2025-05-27T17:38:28.498677618Z" level=info msg="connecting to shim 539cd27dab3a02c2e75096ecc317e9ad6baf6c39ef8ea464e015b6f62fe7f3a3" address="unix:///run/containerd/s/d2e118a20fadd48573c9e9cb9631b68bbe7c169666b8010f25fad61aa623c722" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:28.516987 containerd[1555]: time="2025-05-27T17:38:28.516917023Z" level=info msg="connecting to shim ef88ad2fb89f1370e10d2057046e70e24d855b4c0460468bc3002f113449b361" address="unix:///run/containerd/s/fc8dff2007c1caa1773601b968bb0810fce58d36ea5dc03244715d90e6c1b758" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:28.537048 systemd[1]: Started cri-containerd-817bd7e890c2f9187c1a0fe10ffea2748a88e9f061db7a86919a69f37c0540fc.scope - libcontainer container 817bd7e890c2f9187c1a0fe10ffea2748a88e9f061db7a86919a69f37c0540fc. May 27 17:38:28.552146 systemd[1]: Started cri-containerd-539cd27dab3a02c2e75096ecc317e9ad6baf6c39ef8ea464e015b6f62fe7f3a3.scope - libcontainer container 539cd27dab3a02c2e75096ecc317e9ad6baf6c39ef8ea464e015b6f62fe7f3a3. May 27 17:38:28.567837 systemd[1]: Started cri-containerd-ef88ad2fb89f1370e10d2057046e70e24d855b4c0460468bc3002f113449b361.scope - libcontainer container ef88ad2fb89f1370e10d2057046e70e24d855b4c0460468bc3002f113449b361. 
May 27 17:38:28.612957 kubelet[2332]: I0527 17:38:28.612919 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:38:28.614190 kubelet[2332]: E0527 17:38:28.614143 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 27 17:38:28.642947 containerd[1555]: time="2025-05-27T17:38:28.642886531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"539cd27dab3a02c2e75096ecc317e9ad6baf6c39ef8ea464e015b6f62fe7f3a3\"" May 27 17:38:28.645721 kubelet[2332]: E0527 17:38:28.645638 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:28.649266 containerd[1555]: time="2025-05-27T17:38:28.649217895Z" level=info msg="CreateContainer within sandbox \"539cd27dab3a02c2e75096ecc317e9ad6baf6c39ef8ea464e015b6f62fe7f3a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:38:28.650056 containerd[1555]: time="2025-05-27T17:38:28.650025319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:64dbcb8fc4133d38fbf58fbae66908be,Namespace:kube-system,Attempt:0,} returns sandbox id \"817bd7e890c2f9187c1a0fe10ffea2748a88e9f061db7a86919a69f37c0540fc\"" May 27 17:38:28.650847 kubelet[2332]: E0527 17:38:28.650809 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:28.654233 containerd[1555]: time="2025-05-27T17:38:28.654193987Z" level=info msg="CreateContainer within sandbox \"817bd7e890c2f9187c1a0fe10ffea2748a88e9f061db7a86919a69f37c0540fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:38:28.664010 containerd[1555]: time="2025-05-27T17:38:28.663930797Z" level=info msg="Container 1233a3e846fe26fedf1b803b0355607d200c22d38c5c35d2f8367486798e55bd: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:28.665614 containerd[1555]: time="2025-05-27T17:38:28.665546877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef88ad2fb89f1370e10d2057046e70e24d855b4c0460468bc3002f113449b361\"" May 27 17:38:28.666450 kubelet[2332]: E0527 17:38:28.666424 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:28.668662 containerd[1555]: time="2025-05-27T17:38:28.668621273Z" level=info msg="CreateContainer within sandbox \"ef88ad2fb89f1370e10d2057046e70e24d855b4c0460468bc3002f113449b361\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:38:28.669907 containerd[1555]: time="2025-05-27T17:38:28.669868312Z" level=info msg="Container daae049e9efd1508865a5268c8591f6dc16d4f47a1d78836815597d952c91ed7: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:28.684293 containerd[1555]: time="2025-05-27T17:38:28.684239493Z" level=info msg="CreateContainer within sandbox \"539cd27dab3a02c2e75096ecc317e9ad6baf6c39ef8ea464e015b6f62fe7f3a3\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1233a3e846fe26fedf1b803b0355607d200c22d38c5c35d2f8367486798e55bd\"" May 27 17:38:28.685357 containerd[1555]: time="2025-05-27T17:38:28.685276117Z" level=info msg="StartContainer for \"1233a3e846fe26fedf1b803b0355607d200c22d38c5c35d2f8367486798e55bd\"" May 27 17:38:28.687984 containerd[1555]: time="2025-05-27T17:38:28.687944040Z" level=info msg="connecting to shim 1233a3e846fe26fedf1b803b0355607d200c22d38c5c35d2f8367486798e55bd" address="unix:///run/containerd/s/d2e118a20fadd48573c9e9cb9631b68bbe7c169666b8010f25fad61aa623c722" protocol=ttrpc version=3 May 27 17:38:28.690701 containerd[1555]: time="2025-05-27T17:38:28.690591455Z" level=info msg="Container 9f5e79b87a45a042a6d10a6369d0495ce9c2f91549f7830d91ac302f1fc3e5dc: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:28.692094 containerd[1555]: time="2025-05-27T17:38:28.692025765Z" level=info msg="CreateContainer within sandbox \"817bd7e890c2f9187c1a0fe10ffea2748a88e9f061db7a86919a69f37c0540fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"daae049e9efd1508865a5268c8591f6dc16d4f47a1d78836815597d952c91ed7\"" May 27 17:38:28.692818 containerd[1555]: time="2025-05-27T17:38:28.692776192Z" level=info msg="StartContainer for \"daae049e9efd1508865a5268c8591f6dc16d4f47a1d78836815597d952c91ed7\"" May 27 17:38:28.695576 containerd[1555]: time="2025-05-27T17:38:28.695248729Z" level=info msg="connecting to shim daae049e9efd1508865a5268c8591f6dc16d4f47a1d78836815597d952c91ed7" address="unix:///run/containerd/s/afb852462456455e61a983eb3fd1d000716c6af15dd2b9f3e74cac8386f61c79" protocol=ttrpc version=3 May 27 17:38:28.700884 containerd[1555]: time="2025-05-27T17:38:28.700809758Z" level=info msg="CreateContainer within sandbox \"ef88ad2fb89f1370e10d2057046e70e24d855b4c0460468bc3002f113449b361\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f5e79b87a45a042a6d10a6369d0495ce9c2f91549f7830d91ac302f1fc3e5dc\"" May 27 17:38:28.702050 containerd[1555]: time="2025-05-27T17:38:28.702007053Z" level=info msg="StartContainer for \"9f5e79b87a45a042a6d10a6369d0495ce9c2f91549f7830d91ac302f1fc3e5dc\"" May 27 17:38:28.704029 containerd[1555]: time="2025-05-27T17:38:28.703985043Z" level=info msg="connecting to shim 9f5e79b87a45a042a6d10a6369d0495ce9c2f91549f7830d91ac302f1fc3e5dc" address="unix:///run/containerd/s/fc8dff2007c1caa1773601b968bb0810fce58d36ea5dc03244715d90e6c1b758" protocol=ttrpc version=3 May 27 17:38:28.735878 systemd[1]: Started cri-containerd-1233a3e846fe26fedf1b803b0355607d200c22d38c5c35d2f8367486798e55bd.scope - libcontainer container 1233a3e846fe26fedf1b803b0355607d200c22d38c5c35d2f8367486798e55bd. May 27 17:38:28.747878 systemd[1]: Started cri-containerd-9f5e79b87a45a042a6d10a6369d0495ce9c2f91549f7830d91ac302f1fc3e5dc.scope - libcontainer container 9f5e79b87a45a042a6d10a6369d0495ce9c2f91549f7830d91ac302f1fc3e5dc. May 27 17:38:28.752829 systemd[1]: Started cri-containerd-daae049e9efd1508865a5268c8591f6dc16d4f47a1d78836815597d952c91ed7.scope - libcontainer container daae049e9efd1508865a5268c8591f6dc16d4f47a1d78836815597d952c91ed7. 
May 27 17:38:28.824137 kubelet[2332]: W0527 17:38:28.823738 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 27 17:38:28.824137 kubelet[2332]: E0527 17:38:28.823828 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 27 17:38:28.836717 containerd[1555]: time="2025-05-27T17:38:28.836658062Z" level=info msg="StartContainer for \"daae049e9efd1508865a5268c8591f6dc16d4f47a1d78836815597d952c91ed7\" returns successfully" May 27 17:38:28.876277 containerd[1555]: time="2025-05-27T17:38:28.876225866Z" level=info msg="StartContainer for \"9f5e79b87a45a042a6d10a6369d0495ce9c2f91549f7830d91ac302f1fc3e5dc\" returns successfully" May 27 17:38:28.878157 containerd[1555]: time="2025-05-27T17:38:28.878119527Z" level=info msg="StartContainer for \"1233a3e846fe26fedf1b803b0355607d200c22d38c5c35d2f8367486798e55bd\" returns successfully" May 27 17:38:28.884071 kubelet[2332]: E0527 17:38:28.884037 2332 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:38:28.884197 kubelet[2332]: E0527 17:38:28.884185 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:29.416622 kubelet[2332]: I0527 17:38:29.416558 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:38:29.887309 kubelet[2332]: E0527 17:38:29.887134 2332 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:38:29.887565 kubelet[2332]: E0527 17:38:29.887327 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:29.888217 kubelet[2332]: E0527 17:38:29.887771 2332 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:38:29.888217 kubelet[2332]: E0527 17:38:29.887953 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:29.888391 kubelet[2332]: E0527 17:38:29.888268 2332 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:38:29.888483 kubelet[2332]: E0527 17:38:29.888451 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:30.396887 kubelet[2332]: E0527 17:38:30.396832 2332 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 17:38:30.498909 kubelet[2332]: I0527 17:38:30.498825 2332 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:38:30.498909 kubelet[2332]: E0527 17:38:30.498863 2332 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 17:38:30.547423 kubelet[2332]: I0527 17:38:30.547359 2332 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:38:30.552452 kubelet[2332]: E0527 17:38:30.552395 2332 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 17:38:30.552452 kubelet[2332]: I0527 17:38:30.552432 2332 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:38:30.554301 kubelet[2332]: E0527 17:38:30.554274 2332 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 17:38:30.554301 kubelet[2332]: I0527 17:38:30.554298 2332 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:38:30.555837 kubelet[2332]: E0527 17:38:30.555814 2332 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 17:38:30.834520 kubelet[2332]: I0527 17:38:30.834397 2332 apiserver.go:52] "Watching apiserver" May 27 17:38:30.847281 kubelet[2332]: I0527 17:38:30.847235 2332 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:38:30.886864 kubelet[2332]: I0527 17:38:30.886837 2332 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:38:30.888689 kubelet[2332]: E0527 17:38:30.888665 2332 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 17:38:30.888893 kubelet[2332]: E0527 17:38:30.888862 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:31.143797 kubelet[2332]: I0527 17:38:31.143679 2332 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:38:31.146259 kubelet[2332]: E0527 17:38:31.146217 2332 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 17:38:31.146432 kubelet[2332]: E0527 17:38:31.146403 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:33.217104 systemd[1]: Reload requested from client PID 2608 ('systemctl') (unit session-7.scope)... May 27 17:38:33.217124 systemd[1]: Reloading... May 27 17:38:33.371651 zram_generator::config[2651]: No configuration found. 
May 27 17:38:33.475492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:38:33.628230 systemd[1]: Reloading finished in 410 ms. May 27 17:38:33.660830 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:38:33.684013 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:38:33.684312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:38:33.684367 systemd[1]: kubelet.service: Consumed 1.134s CPU time, 132.5M memory peak. May 27 17:38:33.686340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:38:33.916183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:38:33.925983 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:38:33.979977 kubelet[2696]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:38:33.979977 kubelet[2696]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:38:33.979977 kubelet[2696]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:38:33.980432 kubelet[2696]: I0527 17:38:33.980032 2696 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:38:33.988538 kubelet[2696]: I0527 17:38:33.988497 2696 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 17:38:33.988538 kubelet[2696]: I0527 17:38:33.988528 2696 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:38:33.988830 kubelet[2696]: I0527 17:38:33.988803 2696 server.go:954] "Client rotation is on, will bootstrap in background" May 27 17:38:33.990818 kubelet[2696]: I0527 17:38:33.990786 2696 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 17:38:33.993089 kubelet[2696]: I0527 17:38:33.993035 2696 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:38:34.018024 kubelet[2696]: I0527 17:38:34.017992 2696 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:38:34.022766 kubelet[2696]: I0527 17:38:34.022745 2696 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:38:34.022998 kubelet[2696]: I0527 17:38:34.022945 2696 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:38:34.023135 kubelet[2696]: I0527 17:38:34.022983 2696 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:38:34.023230 kubelet[2696]: I0527 17:38:34.023139 2696 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:38:34.023230 kubelet[2696]: I0527 17:38:34.023148 2696 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:38:34.023230 kubelet[2696]: I0527 17:38:34.023194 2696 state_mem.go:36] "Initialized new in-memory state store" May 27 17:38:34.023364 kubelet[2696]: I0527 17:38:34.023347 2696 kubelet.go:446] "Attempting to sync node with API server" May 27 17:38:34.023401 kubelet[2696]: I0527 17:38:34.023385 2696 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:38:34.023438 kubelet[2696]: I0527 17:38:34.023409 2696 kubelet.go:352] "Adding apiserver pod source" May 27 17:38:34.023438 kubelet[2696]: I0527 17:38:34.023419 2696 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:38:34.024664 kubelet[2696]: I0527 17:38:34.024638 2696 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:38:34.025050 kubelet[2696]: I0527 17:38:34.025027 2696 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:38:34.025464 kubelet[2696]: I0527 17:38:34.025438 2696 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:38:34.025464 kubelet[2696]: I0527 17:38:34.025467 2696 server.go:1287] "Started kubelet" May 27 17:38:34.026419 kubelet[2696]: I0527 17:38:34.026358 2696 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:38:34.026974 kubelet[2696]: I0527 
17:38:34.026946 2696 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:38:34.027034 kubelet[2696]: I0527 17:38:34.027015 2696 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:38:34.027263 kubelet[2696]: I0527 17:38:34.027236 2696 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:38:34.028014 kubelet[2696]: I0527 17:38:34.027978 2696 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:38:34.031794 kubelet[2696]: I0527 17:38:34.031737 2696 server.go:479] "Adding debug handlers to kubelet server" May 27 17:38:34.038107 kubelet[2696]: E0527 17:38:34.037905 2696 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:38:34.038107 kubelet[2696]: I0527 17:38:34.037967 2696 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:38:34.038275 kubelet[2696]: I0527 17:38:34.038246 2696 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:38:34.038481 kubelet[2696]: I0527 17:38:34.038452 2696 reconciler.go:26] "Reconciler: start to sync state" May 27 17:38:34.039743 kubelet[2696]: I0527 17:38:34.039263 2696 factory.go:221] Registration of the systemd container factory successfully May 27 17:38:34.039743 kubelet[2696]: I0527 17:38:34.039385 2696 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:38:34.039743 kubelet[2696]: E0527 17:38:34.039610 2696 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:38:34.040875 kubelet[2696]: I0527 17:38:34.040858 2696 factory.go:221] Registration of the containerd container factory successfully May 27 17:38:34.046355 kubelet[2696]: I0527 17:38:34.045951 2696 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:38:34.049648 kubelet[2696]: I0527 17:38:34.048993 2696 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:38:34.049648 kubelet[2696]: I0527 17:38:34.049077 2696 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:38:34.049648 kubelet[2696]: I0527 17:38:34.049113 2696 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
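
The watchdog_linux.go lines here (and their twins in the first kubelet run above) report that systemd watchdog health checking stays off because the kubelet unit sets no WatchdogSec=. When it is enabled, the mechanism is the plain sd_notify convention: the service reads WATCHDOG_USEC from its environment and periodically writes "WATCHDOG=1" datagrams to $NOTIFY_SOCKET. The sketch below illustrates that convention only; it is not kubelet source, and it skips details such as abstract-socket name translation:

    package main

    import (
        "net"
        "os"
        "strconv"
        "time"
    )

    func main() {
        sock := os.Getenv("NOTIFY_SOCKET")
        usec, err := strconv.Atoi(os.Getenv("WATCHDOG_USEC"))
        if sock == "" || err != nil || usec <= 0 {
            return // watchdog not enabled for this unit, as in the log above
        }
        conn, err := net.DialUnix("unixgram", nil,
            &net.UnixAddr{Name: sock, Net: "unixgram"})
        if err != nil {
            return
        }
        defer conn.Close()

        // Pet the watchdog at half the configured interval, the usual rule
        // of thumb so one missed tick does not kill the service.
        interval := time.Duration(usec) * time.Microsecond / 2
        for range time.Tick(interval) {
            conn.Write([]byte("WATCHDOG=1"))
        }
    }
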
May 27 17:38:34.049648 kubelet[2696]: I0527 17:38:34.049124 2696 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:38:34.049648 kubelet[2696]: E0527 17:38:34.049191 2696 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:38:34.128400 kubelet[2696]: I0527 17:38:34.128359 2696 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:38:34.128400 kubelet[2696]: I0527 17:38:34.128387 2696 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:38:34.128653 kubelet[2696]: I0527 17:38:34.128417 2696 state_mem.go:36] "Initialized new in-memory state store" May 27 17:38:34.128675 kubelet[2696]: I0527 17:38:34.128657 2696 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:38:34.128735 kubelet[2696]: I0527 17:38:34.128671 2696 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:38:34.128735 kubelet[2696]: I0527 17:38:34.128726 2696 policy_none.go:49] "None policy: Start" May 27 17:38:34.128735 kubelet[2696]: I0527 17:38:34.128737 2696 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:38:34.128848 kubelet[2696]: I0527 17:38:34.128749 2696 state_mem.go:35] "Initializing new in-memory state store" May 27 17:38:34.128891 kubelet[2696]: I0527 17:38:34.128866 2696 state_mem.go:75] "Updated machine memory state" May 27 17:38:34.134689 kubelet[2696]: I0527 17:38:34.134659 2696 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:38:34.134889 kubelet[2696]: I0527 17:38:34.134868 2696 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:38:34.135057 kubelet[2696]: I0527 17:38:34.134896 2696 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:38:34.135350 kubelet[2696]: I0527 17:38:34.135287 2696 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:38:34.136822 kubelet[2696]: E0527 17:38:34.136778 2696 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 17:38:34.152103 kubelet[2696]: I0527 17:38:34.151264 2696 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:38:34.152103 kubelet[2696]: I0527 17:38:34.151391 2696 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:38:34.152103 kubelet[2696]: I0527 17:38:34.151281 2696 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:38:34.240783 kubelet[2696]: I0527 17:38:34.240736 2696 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:38:34.248610 kubelet[2696]: I0527 17:38:34.248558 2696 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 17:38:34.248740 kubelet[2696]: I0527 17:38:34.248666 2696 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:38:34.339937 kubelet[2696]: I0527 17:38:34.339867 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:34.339937 kubelet[2696]: I0527 17:38:34.339927 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:34.340154 kubelet[2696]: I0527 17:38:34.339971 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:34.340154 kubelet[2696]: I0527 17:38:34.340021 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64dbcb8fc4133d38fbf58fbae66908be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"64dbcb8fc4133d38fbf58fbae66908be\") " pod="kube-system/kube-apiserver-localhost" May 27 17:38:34.340154 kubelet[2696]: I0527 17:38:34.340062 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:34.340154 kubelet[2696]: I0527 17:38:34.340088 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:38:34.340154 kubelet[2696]: I0527 17:38:34.340115 2696 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 17:38:34.340289 kubelet[2696]: I0527 17:38:34.340139 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64dbcb8fc4133d38fbf58fbae66908be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"64dbcb8fc4133d38fbf58fbae66908be\") " pod="kube-system/kube-apiserver-localhost" May 27 17:38:34.340289 kubelet[2696]: I0527 17:38:34.340161 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64dbcb8fc4133d38fbf58fbae66908be-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"64dbcb8fc4133d38fbf58fbae66908be\") " pod="kube-system/kube-apiserver-localhost" May 27 17:38:34.460944 kubelet[2696]: E0527 17:38:34.460901 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:34.461741 kubelet[2696]: E0527 17:38:34.461696 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:34.461796 kubelet[2696]: E0527 17:38:34.461770 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:35.024397 kubelet[2696]: I0527 17:38:35.024329 2696 apiserver.go:52] "Watching apiserver" May 27 17:38:35.039290 kubelet[2696]: I0527 17:38:35.039240 2696 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:38:35.059236 kubelet[2696]: I0527 17:38:35.059121 2696 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:38:35.059236 kubelet[2696]: E0527 17:38:35.059135 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:35.059504 kubelet[2696]: E0527 17:38:35.059273 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:35.310645 kubelet[2696]: E0527 17:38:35.308835 2696 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:38:35.310645 kubelet[2696]: E0527 17:38:35.309092 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:35.310645 kubelet[2696]: I0527 17:38:35.309471 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.309442754 podStartE2EDuration="1.309442754s" podCreationTimestamp="2025-05-27 17:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 
17:38:35.308400736 +0000 UTC m=+1.375187381" watchObservedRunningTime="2025-05-27 17:38:35.309442754 +0000 UTC m=+1.376229389" May 27 17:38:35.439778 kubelet[2696]: I0527 17:38:35.439693 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.439673945 podStartE2EDuration="1.439673945s" podCreationTimestamp="2025-05-27 17:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:38:35.369115087 +0000 UTC m=+1.435901732" watchObservedRunningTime="2025-05-27 17:38:35.439673945 +0000 UTC m=+1.506460590" May 27 17:38:35.439969 kubelet[2696]: I0527 17:38:35.439803 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.439794786 podStartE2EDuration="1.439794786s" podCreationTimestamp="2025-05-27 17:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:38:35.43930654 +0000 UTC m=+1.506093215" watchObservedRunningTime="2025-05-27 17:38:35.439794786 +0000 UTC m=+1.506581431" May 27 17:38:36.060206 kubelet[2696]: E0527 17:38:36.060156 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:36.060750 kubelet[2696]: E0527 17:38:36.060445 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:37.062054 kubelet[2696]: E0527 17:38:37.061962 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:39.198202 kubelet[2696]: I0527 17:38:39.198159 2696 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:38:39.198703 kubelet[2696]: I0527 17:38:39.198607 2696 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:38:39.198731 containerd[1555]: time="2025-05-27T17:38:39.198432613Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:38:39.641571 systemd[1]: Created slice kubepods-besteffort-podacbc83d5_0cea_405f_9651_72ae550b73b5.slice - libcontainer container kubepods-besteffort-podacbc83d5_0cea_405f_9651_72ae550b73b5.slice. 
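
The recurring dns.go:153 warning ("the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") appears because the glibc resolver honours at most three nameserver entries, so the kubelet applies the first three from the host's /etc/resolv.conf and reports the remainder omitted. A simplified sketch of that trimming; the parsing below is an assumption for illustration, not the kubelet's code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc resolver limit (MAXNS)

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, applying first %d: %s\n",
                maxNameservers, strings.Join(servers[:maxNameservers], " "))
        } else {
            fmt.Println("applied nameservers:", strings.Join(servers, " "))
        }
    }

With the host resolv.conf from this log (four or more entries, of which 1.1.1.1, 1.0.0.1, and 8.8.8.8 survive), the warning repeats every time a pod sandbox inherits the node's DNS config, which is why it accompanies each static pod start above.
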
May 27 17:38:39.768366 kubelet[2696]: I0527 17:38:39.768305 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acbc83d5-0cea-405f-9651-72ae550b73b5-lib-modules\") pod \"kube-proxy-p24wg\" (UID: \"acbc83d5-0cea-405f-9651-72ae550b73b5\") " pod="kube-system/kube-proxy-p24wg" May 27 17:38:39.768539 kubelet[2696]: I0527 17:38:39.768371 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7km\" (UniqueName: \"kubernetes.io/projected/acbc83d5-0cea-405f-9651-72ae550b73b5-kube-api-access-bf7km\") pod \"kube-proxy-p24wg\" (UID: \"acbc83d5-0cea-405f-9651-72ae550b73b5\") " pod="kube-system/kube-proxy-p24wg" May 27 17:38:39.768539 kubelet[2696]: I0527 17:38:39.768452 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acbc83d5-0cea-405f-9651-72ae550b73b5-xtables-lock\") pod \"kube-proxy-p24wg\" (UID: \"acbc83d5-0cea-405f-9651-72ae550b73b5\") " pod="kube-system/kube-proxy-p24wg" May 27 17:38:39.768539 kubelet[2696]: I0527 17:38:39.768488 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/acbc83d5-0cea-405f-9651-72ae550b73b5-kube-proxy\") pod \"kube-proxy-p24wg\" (UID: \"acbc83d5-0cea-405f-9651-72ae550b73b5\") " pod="kube-system/kube-proxy-p24wg" May 27 17:38:39.948990 kubelet[2696]: E0527 17:38:39.948901 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:39.949810 containerd[1555]: time="2025-05-27T17:38:39.949725701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p24wg,Uid:acbc83d5-0cea-405f-9651-72ae550b73b5,Namespace:kube-system,Attempt:0,}" May 27 17:38:39.973004 containerd[1555]: time="2025-05-27T17:38:39.972953087Z" level=info msg="connecting to shim 168ba263d4bf14a1b3791e50150e98b7fdc87d3168fe8c58d644d8fc00412191" address="unix:///run/containerd/s/8bfa150fe9c6296c28f87f3b75e00580ea17a92963a417f6ac9715bc78d0cd1e" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:40.006169 systemd[1]: Started cri-containerd-168ba263d4bf14a1b3791e50150e98b7fdc87d3168fe8c58d644d8fc00412191.scope - libcontainer container 168ba263d4bf14a1b3791e50150e98b7fdc87d3168fe8c58d644d8fc00412191. 
May 27 17:38:40.037363 containerd[1555]: time="2025-05-27T17:38:40.037323208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p24wg,Uid:acbc83d5-0cea-405f-9651-72ae550b73b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"168ba263d4bf14a1b3791e50150e98b7fdc87d3168fe8c58d644d8fc00412191\"" May 27 17:38:40.038177 kubelet[2696]: E0527 17:38:40.038148 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:40.041521 containerd[1555]: time="2025-05-27T17:38:40.041476273Z" level=info msg="CreateContainer within sandbox \"168ba263d4bf14a1b3791e50150e98b7fdc87d3168fe8c58d644d8fc00412191\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:38:40.054188 containerd[1555]: time="2025-05-27T17:38:40.054134398Z" level=info msg="Container 0e60ecb926af5f7a27fd2b8c3f2b789fcb56467732c29c6dc71e19d18fe6f8d6: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:40.063449 containerd[1555]: time="2025-05-27T17:38:40.063399576Z" level=info msg="CreateContainer within sandbox \"168ba263d4bf14a1b3791e50150e98b7fdc87d3168fe8c58d644d8fc00412191\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e60ecb926af5f7a27fd2b8c3f2b789fcb56467732c29c6dc71e19d18fe6f8d6\"" May 27 17:38:40.065128 containerd[1555]: time="2025-05-27T17:38:40.064165827Z" level=info msg="StartContainer for \"0e60ecb926af5f7a27fd2b8c3f2b789fcb56467732c29c6dc71e19d18fe6f8d6\"" May 27 17:38:40.067011 containerd[1555]: time="2025-05-27T17:38:40.066964020Z" level=info msg="connecting to shim 0e60ecb926af5f7a27fd2b8c3f2b789fcb56467732c29c6dc71e19d18fe6f8d6" address="unix:///run/containerd/s/8bfa150fe9c6296c28f87f3b75e00580ea17a92963a417f6ac9715bc78d0cd1e" protocol=ttrpc version=3 May 27 17:38:40.100918 systemd[1]: Started cri-containerd-0e60ecb926af5f7a27fd2b8c3f2b789fcb56467732c29c6dc71e19d18fe6f8d6.scope - libcontainer container 0e60ecb926af5f7a27fd2b8c3f2b789fcb56467732c29c6dc71e19d18fe6f8d6. May 27 17:38:40.151491 containerd[1555]: time="2025-05-27T17:38:40.151424432Z" level=info msg="StartContainer for \"0e60ecb926af5f7a27fd2b8c3f2b789fcb56467732c29c6dc71e19d18fe6f8d6\" returns successfully" May 27 17:38:40.227646 systemd[1]: Created slice kubepods-besteffort-pod122dd571_2b0a_4dcb_999c_239eab4ad99d.slice - libcontainer container kubepods-besteffort-pod122dd571_2b0a_4dcb_999c_239eab4ad99d.slice. 
May 27 17:38:40.272470 kubelet[2696]: I0527 17:38:40.272399 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5848\" (UniqueName: \"kubernetes.io/projected/122dd571-2b0a-4dcb-999c-239eab4ad99d-kube-api-access-p5848\") pod \"tigera-operator-844669ff44-wzxsk\" (UID: \"122dd571-2b0a-4dcb-999c-239eab4ad99d\") " pod="tigera-operator/tigera-operator-844669ff44-wzxsk" May 27 17:38:40.273193 kubelet[2696]: I0527 17:38:40.272526 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/122dd571-2b0a-4dcb-999c-239eab4ad99d-var-lib-calico\") pod \"tigera-operator-844669ff44-wzxsk\" (UID: \"122dd571-2b0a-4dcb-999c-239eab4ad99d\") " pod="tigera-operator/tigera-operator-844669ff44-wzxsk" May 27 17:38:40.531829 containerd[1555]: time="2025-05-27T17:38:40.531641242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-wzxsk,Uid:122dd571-2b0a-4dcb-999c-239eab4ad99d,Namespace:tigera-operator,Attempt:0,}" May 27 17:38:40.553968 containerd[1555]: time="2025-05-27T17:38:40.553909863Z" level=info msg="connecting to shim 0287eabf449952333b49378806c6b5652271b8e10717663857c97a10039bc7b5" address="unix:///run/containerd/s/45debb9d3f6d2514c43cc22048bf00052b26d847d9bb149139552bc7474aca6b" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:40.581828 systemd[1]: Started cri-containerd-0287eabf449952333b49378806c6b5652271b8e10717663857c97a10039bc7b5.scope - libcontainer container 0287eabf449952333b49378806c6b5652271b8e10717663857c97a10039bc7b5. May 27 17:38:40.634619 containerd[1555]: time="2025-05-27T17:38:40.634550375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-wzxsk,Uid:122dd571-2b0a-4dcb-999c-239eab4ad99d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0287eabf449952333b49378806c6b5652271b8e10717663857c97a10039bc7b5\"" May 27 17:38:40.636227 containerd[1555]: time="2025-05-27T17:38:40.636203405Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:38:41.075817 kubelet[2696]: E0527 17:38:41.075760 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:41.090307 kubelet[2696]: I0527 17:38:41.090235 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p24wg" podStartSLOduration=2.090196286 podStartE2EDuration="2.090196286s" podCreationTimestamp="2025-05-27 17:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:38:41.089966498 +0000 UTC m=+7.156753153" watchObservedRunningTime="2025-05-27 17:38:41.090196286 +0000 UTC m=+7.156982921" May 27 17:38:42.078652 kubelet[2696]: E0527 17:38:42.078583 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:42.178186 kubelet[2696]: E0527 17:38:42.178143 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:42.645817 kubelet[2696]: E0527 17:38:42.645776 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:43.080084 kubelet[2696]: E0527 17:38:43.079980 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:43.080499 kubelet[2696]: E0527 17:38:43.080152 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:43.480787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554565420.mount: Deactivated successfully. May 27 17:38:44.081869 kubelet[2696]: E0527 17:38:44.081819 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:45.173411 update_engine[1543]: I20250527 17:38:45.173275 1543 update_attempter.cc:509] Updating boot flags... May 27 17:38:45.978345 containerd[1555]: time="2025-05-27T17:38:45.978285341Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:46.010247 containerd[1555]: time="2025-05-27T17:38:46.010178025Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 17:38:46.023148 containerd[1555]: time="2025-05-27T17:38:46.023084867Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:46.209275 containerd[1555]: time="2025-05-27T17:38:46.209223082Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:46.209787 containerd[1555]: time="2025-05-27T17:38:46.209745462Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 5.573513884s" May 27 17:38:46.209787 containerd[1555]: time="2025-05-27T17:38:46.209772303Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 17:38:46.211635 containerd[1555]: time="2025-05-27T17:38:46.211584399Z" level=info msg="CreateContainer within sandbox \"0287eabf449952333b49378806c6b5652271b8e10717663857c97a10039bc7b5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:38:46.307114 containerd[1555]: time="2025-05-27T17:38:46.306614203Z" level=info msg="Container 59345c53b2bb4c4cbf542655d324586e888785e7dd87a0ee5759fdee7d03dc34: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:46.345449 containerd[1555]: time="2025-05-27T17:38:46.345385036Z" level=info msg="CreateContainer within sandbox \"0287eabf449952333b49378806c6b5652271b8e10717663857c97a10039bc7b5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"59345c53b2bb4c4cbf542655d324586e888785e7dd87a0ee5759fdee7d03dc34\"" May 27 17:38:46.345886 containerd[1555]: time="2025-05-27T17:38:46.345845999Z" level=info 
msg="StartContainer for \"59345c53b2bb4c4cbf542655d324586e888785e7dd87a0ee5759fdee7d03dc34\"" May 27 17:38:46.346691 containerd[1555]: time="2025-05-27T17:38:46.346666495Z" level=info msg="connecting to shim 59345c53b2bb4c4cbf542655d324586e888785e7dd87a0ee5759fdee7d03dc34" address="unix:///run/containerd/s/45debb9d3f6d2514c43cc22048bf00052b26d847d9bb149139552bc7474aca6b" protocol=ttrpc version=3 May 27 17:38:46.406715 systemd[1]: Started cri-containerd-59345c53b2bb4c4cbf542655d324586e888785e7dd87a0ee5759fdee7d03dc34.scope - libcontainer container 59345c53b2bb4c4cbf542655d324586e888785e7dd87a0ee5759fdee7d03dc34. May 27 17:38:46.477079 containerd[1555]: time="2025-05-27T17:38:46.477020267Z" level=info msg="StartContainer for \"59345c53b2bb4c4cbf542655d324586e888785e7dd87a0ee5759fdee7d03dc34\" returns successfully" May 27 17:38:46.512127 kubelet[2696]: E0527 17:38:46.512089 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:47.087579 kubelet[2696]: E0527 17:38:47.087538 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:47.097631 kubelet[2696]: I0527 17:38:47.097255 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-wzxsk" podStartSLOduration=1.522474069 podStartE2EDuration="7.097227639s" podCreationTimestamp="2025-05-27 17:38:40 +0000 UTC" firstStartedPulling="2025-05-27 17:38:40.635671902 +0000 UTC m=+6.702458547" lastFinishedPulling="2025-05-27 17:38:46.210425472 +0000 UTC m=+12.277212117" observedRunningTime="2025-05-27 17:38:47.096556006 +0000 UTC m=+13.163342651" watchObservedRunningTime="2025-05-27 17:38:47.097227639 +0000 UTC m=+13.164014284" May 27 17:38:52.113925 sudo[1765]: pam_unix(sudo:session): session closed for user root May 27 17:38:52.115688 sshd[1764]: Connection closed by 10.0.0.1 port 32928 May 27 17:38:52.116880 sshd-session[1762]: pam_unix(sshd:session): session closed for user core May 27 17:38:52.123961 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:32928.service: Deactivated successfully. May 27 17:38:52.127735 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:38:52.128263 systemd[1]: session-7.scope: Consumed 5.277s CPU time, 225.2M memory peak. May 27 17:38:52.132966 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. May 27 17:38:52.135276 systemd-logind[1539]: Removed session 7. May 27 17:38:54.914657 systemd[1]: Created slice kubepods-besteffort-podc42a2786_a352_4f4b_a067_c47d3bcec22f.slice - libcontainer container kubepods-besteffort-podc42a2786_a352_4f4b_a067_c47d3bcec22f.slice. 
May 27 17:38:54.968633 kubelet[2696]: I0527 17:38:54.968225 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c42a2786-a352-4f4b-a067-c47d3bcec22f-tigera-ca-bundle\") pod \"calico-typha-67864c9877-kzlbx\" (UID: \"c42a2786-a352-4f4b-a067-c47d3bcec22f\") " pod="calico-system/calico-typha-67864c9877-kzlbx" May 27 17:38:54.968633 kubelet[2696]: I0527 17:38:54.968276 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcwq4\" (UniqueName: \"kubernetes.io/projected/c42a2786-a352-4f4b-a067-c47d3bcec22f-kube-api-access-wcwq4\") pod \"calico-typha-67864c9877-kzlbx\" (UID: \"c42a2786-a352-4f4b-a067-c47d3bcec22f\") " pod="calico-system/calico-typha-67864c9877-kzlbx" May 27 17:38:54.969180 kubelet[2696]: I0527 17:38:54.968293 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c42a2786-a352-4f4b-a067-c47d3bcec22f-typha-certs\") pod \"calico-typha-67864c9877-kzlbx\" (UID: \"c42a2786-a352-4f4b-a067-c47d3bcec22f\") " pod="calico-system/calico-typha-67864c9877-kzlbx" May 27 17:38:55.047381 systemd[1]: Created slice kubepods-besteffort-pod017dd9fe_9713_454d_ac64_64b3b58b0847.slice - libcontainer container kubepods-besteffort-pod017dd9fe_9713_454d_ac64_64b3b58b0847.slice. May 27 17:38:55.070616 kubelet[2696]: I0527 17:38:55.070210 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-cni-net-dir\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070616 kubelet[2696]: I0527 17:38:55.070252 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/017dd9fe-9713-454d-ac64-64b3b58b0847-tigera-ca-bundle\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070616 kubelet[2696]: I0527 17:38:55.070274 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-cni-log-dir\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070616 kubelet[2696]: I0527 17:38:55.070296 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-var-run-calico\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070616 kubelet[2696]: I0527 17:38:55.070314 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-xtables-lock\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070902 kubelet[2696]: I0527 17:38:55.070362 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-policysync\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070902 kubelet[2696]: I0527 17:38:55.070383 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-lib-modules\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070902 kubelet[2696]: I0527 17:38:55.070405 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/017dd9fe-9713-454d-ac64-64b3b58b0847-node-certs\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070902 kubelet[2696]: I0527 17:38:55.070424 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8wz\" (UniqueName: \"kubernetes.io/projected/017dd9fe-9713-454d-ac64-64b3b58b0847-kube-api-access-jl8wz\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.070902 kubelet[2696]: I0527 17:38:55.070458 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-cni-bin-dir\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.071054 kubelet[2696]: I0527 17:38:55.070481 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-var-lib-calico\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.071054 kubelet[2696]: I0527 17:38:55.070525 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/017dd9fe-9713-454d-ac64-64b3b58b0847-flexvol-driver-host\") pod \"calico-node-7ngmw\" (UID: \"017dd9fe-9713-454d-ac64-64b3b58b0847\") " pod="calico-system/calico-node-7ngmw" May 27 17:38:55.181298 kubelet[2696]: E0527 17:38:55.181262 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.181298 kubelet[2696]: W0527 17:38:55.181285 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.181461 kubelet[2696]: E0527 17:38:55.181324 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.220636 kubelet[2696]: E0527 17:38:55.220527 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:55.225682 containerd[1555]: time="2025-05-27T17:38:55.225609228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67864c9877-kzlbx,Uid:c42a2786-a352-4f4b-a067-c47d3bcec22f,Namespace:calico-system,Attempt:0,}" May 27 17:38:55.351298 containerd[1555]: time="2025-05-27T17:38:55.351235408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7ngmw,Uid:017dd9fe-9713-454d-ac64-64b3b58b0847,Namespace:calico-system,Attempt:0,}" May 27 17:38:55.436303 kubelet[2696]: E0527 17:38:55.435938 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2xzb" podUID="1a8befa0-930c-44c3-a3e5-53b9fdc761fb" May 27 17:38:55.455568 kubelet[2696]: E0527 17:38:55.455530 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.455568 kubelet[2696]: W0527 17:38:55.455554 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.455568 kubelet[2696]: E0527 17:38:55.455576 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.455807 kubelet[2696]: E0527 17:38:55.455787 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.455807 kubelet[2696]: W0527 17:38:55.455805 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.455854 kubelet[2696]: E0527 17:38:55.455814 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.456041 kubelet[2696]: E0527 17:38:55.456017 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.456041 kubelet[2696]: W0527 17:38:55.456028 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.456041 kubelet[2696]: E0527 17:38:55.456036 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.456250 kubelet[2696]: E0527 17:38:55.456234 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.456250 kubelet[2696]: W0527 17:38:55.456247 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.456323 kubelet[2696]: E0527 17:38:55.456255 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.456477 kubelet[2696]: E0527 17:38:55.456460 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.456477 kubelet[2696]: W0527 17:38:55.456471 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.456530 kubelet[2696]: E0527 17:38:55.456478 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.456674 kubelet[2696]: E0527 17:38:55.456658 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.456674 kubelet[2696]: W0527 17:38:55.456668 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.456737 kubelet[2696]: E0527 17:38:55.456678 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.456850 kubelet[2696]: E0527 17:38:55.456835 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.456850 kubelet[2696]: W0527 17:38:55.456845 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.456895 kubelet[2696]: E0527 17:38:55.456852 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.457034 kubelet[2696]: E0527 17:38:55.457007 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.457034 kubelet[2696]: W0527 17:38:55.457020 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.457034 kubelet[2696]: E0527 17:38:55.457027 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.457328 kubelet[2696]: E0527 17:38:55.457298 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.457328 kubelet[2696]: W0527 17:38:55.457328 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.457408 kubelet[2696]: E0527 17:38:55.457368 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.457701 kubelet[2696]: E0527 17:38:55.457674 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.457746 kubelet[2696]: W0527 17:38:55.457702 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.457746 kubelet[2696]: E0527 17:38:55.457715 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.457913 kubelet[2696]: E0527 17:38:55.457901 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.457913 kubelet[2696]: W0527 17:38:55.457913 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.457978 kubelet[2696]: E0527 17:38:55.457922 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.458103 kubelet[2696]: E0527 17:38:55.458090 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.458103 kubelet[2696]: W0527 17:38:55.458100 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.458171 kubelet[2696]: E0527 17:38:55.458108 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.458347 kubelet[2696]: E0527 17:38:55.458311 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.458347 kubelet[2696]: W0527 17:38:55.458320 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.458347 kubelet[2696]: E0527 17:38:55.458328 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.458530 kubelet[2696]: E0527 17:38:55.458507 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.458530 kubelet[2696]: W0527 17:38:55.458527 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.458577 kubelet[2696]: E0527 17:38:55.458535 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.458703 kubelet[2696]: E0527 17:38:55.458691 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.458703 kubelet[2696]: W0527 17:38:55.458700 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.458753 kubelet[2696]: E0527 17:38:55.458707 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.458876 kubelet[2696]: E0527 17:38:55.458863 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.458876 kubelet[2696]: W0527 17:38:55.458872 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.458923 kubelet[2696]: E0527 17:38:55.458880 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.459063 kubelet[2696]: E0527 17:38:55.459050 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.459063 kubelet[2696]: W0527 17:38:55.459059 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.459130 kubelet[2696]: E0527 17:38:55.459067 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.459245 kubelet[2696]: E0527 17:38:55.459233 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.459245 kubelet[2696]: W0527 17:38:55.459242 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.459296 kubelet[2696]: E0527 17:38:55.459250 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.459415 kubelet[2696]: E0527 17:38:55.459403 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.459415 kubelet[2696]: W0527 17:38:55.459412 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.459474 kubelet[2696]: E0527 17:38:55.459420 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.459580 kubelet[2696]: E0527 17:38:55.459568 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.459580 kubelet[2696]: W0527 17:38:55.459578 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.459653 kubelet[2696]: E0527 17:38:55.459588 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.474351 kubelet[2696]: E0527 17:38:55.474284 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.474351 kubelet[2696]: W0527 17:38:55.474340 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.474541 kubelet[2696]: E0527 17:38:55.474375 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.474541 kubelet[2696]: I0527 17:38:55.474426 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a8befa0-930c-44c3-a3e5-53b9fdc761fb-socket-dir\") pod \"csi-node-driver-v2xzb\" (UID: \"1a8befa0-930c-44c3-a3e5-53b9fdc761fb\") " pod="calico-system/csi-node-driver-v2xzb" May 27 17:38:55.474801 kubelet[2696]: E0527 17:38:55.474777 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.474801 kubelet[2696]: W0527 17:38:55.474794 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.474893 kubelet[2696]: E0527 17:38:55.474815 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.474893 kubelet[2696]: I0527 17:38:55.474834 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a8befa0-930c-44c3-a3e5-53b9fdc761fb-kubelet-dir\") pod \"csi-node-driver-v2xzb\" (UID: \"1a8befa0-930c-44c3-a3e5-53b9fdc761fb\") " pod="calico-system/csi-node-driver-v2xzb" May 27 17:38:55.476628 kubelet[2696]: E0527 17:38:55.476544 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.476628 kubelet[2696]: W0527 17:38:55.476572 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.476856 kubelet[2696]: E0527 17:38:55.476812 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.477270 kubelet[2696]: E0527 17:38:55.477230 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.477270 kubelet[2696]: W0527 17:38:55.477258 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.477372 kubelet[2696]: E0527 17:38:55.477276 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.477434 kubelet[2696]: I0527 17:38:55.477407 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a8befa0-930c-44c3-a3e5-53b9fdc761fb-registration-dir\") pod \"csi-node-driver-v2xzb\" (UID: \"1a8befa0-930c-44c3-a3e5-53b9fdc761fb\") " pod="calico-system/csi-node-driver-v2xzb" May 27 17:38:55.478368 kubelet[2696]: E0527 17:38:55.478252 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.478368 kubelet[2696]: W0527 17:38:55.478301 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.478368 kubelet[2696]: E0527 17:38:55.478323 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.478865 kubelet[2696]: E0527 17:38:55.478834 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.478865 kubelet[2696]: W0527 17:38:55.478857 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.478967 kubelet[2696]: E0527 17:38:55.478870 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.479089 kubelet[2696]: E0527 17:38:55.479059 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.479089 kubelet[2696]: W0527 17:38:55.479073 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.479089 kubelet[2696]: E0527 17:38:55.479081 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.479410 kubelet[2696]: E0527 17:38:55.479234 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.479410 kubelet[2696]: W0527 17:38:55.479241 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.479410 kubelet[2696]: E0527 17:38:55.479249 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.479410 kubelet[2696]: E0527 17:38:55.479403 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.479410 kubelet[2696]: W0527 17:38:55.479411 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.479655 kubelet[2696]: E0527 17:38:55.479420 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.479655 kubelet[2696]: E0527 17:38:55.479565 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.479655 kubelet[2696]: W0527 17:38:55.479571 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.479655 kubelet[2696]: E0527 17:38:55.479578 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.479655 kubelet[2696]: I0527 17:38:55.479623 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1a8befa0-930c-44c3-a3e5-53b9fdc761fb-varrun\") pod \"csi-node-driver-v2xzb\" (UID: \"1a8befa0-930c-44c3-a3e5-53b9fdc761fb\") " pod="calico-system/csi-node-driver-v2xzb" May 27 17:38:55.482814 kubelet[2696]: E0527 17:38:55.480668 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.482814 kubelet[2696]: W0527 17:38:55.480685 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.482814 kubelet[2696]: E0527 17:38:55.480706 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.482814 kubelet[2696]: E0527 17:38:55.480868 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.482814 kubelet[2696]: W0527 17:38:55.480876 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.482814 kubelet[2696]: E0527 17:38:55.480885 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.482814 kubelet[2696]: E0527 17:38:55.481040 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.482814 kubelet[2696]: W0527 17:38:55.481047 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.482814 kubelet[2696]: E0527 17:38:55.481056 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.483154 kubelet[2696]: I0527 17:38:55.481080 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th2mc\" (UniqueName: \"kubernetes.io/projected/1a8befa0-930c-44c3-a3e5-53b9fdc761fb-kube-api-access-th2mc\") pod \"csi-node-driver-v2xzb\" (UID: \"1a8befa0-930c-44c3-a3e5-53b9fdc761fb\") " pod="calico-system/csi-node-driver-v2xzb" May 27 17:38:55.483154 kubelet[2696]: E0527 17:38:55.482410 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.483154 kubelet[2696]: W0527 17:38:55.482425 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.483154 kubelet[2696]: E0527 17:38:55.482459 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.483154 kubelet[2696]: E0527 17:38:55.482671 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.483154 kubelet[2696]: W0527 17:38:55.482682 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.483154 kubelet[2696]: E0527 17:38:55.482691 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.512649 containerd[1555]: time="2025-05-27T17:38:55.511989404Z" level=info msg="connecting to shim d869a4d07686a1459d18b4a03262e202d6b9e2b2dbf45ba300071eda096239fd" address="unix:///run/containerd/s/08931c279a1401f5029c220dc66041a4c79e157be7a04d4ba106052ddac85b94" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:55.540863 systemd[1]: Started cri-containerd-d869a4d07686a1459d18b4a03262e202d6b9e2b2dbf45ba300071eda096239fd.scope - libcontainer container d869a4d07686a1459d18b4a03262e202d6b9e2b2dbf45ba300071eda096239fd. May 27 17:38:55.582280 kubelet[2696]: E0527 17:38:55.582245 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.582280 kubelet[2696]: W0527 17:38:55.582268 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.582280 kubelet[2696]: E0527 17:38:55.582287 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.582541 kubelet[2696]: E0527 17:38:55.582525 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.582541 kubelet[2696]: W0527 17:38:55.582537 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.582667 kubelet[2696]: E0527 17:38:55.582555 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.582847 kubelet[2696]: E0527 17:38:55.582783 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.582847 kubelet[2696]: W0527 17:38:55.582797 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.582847 kubelet[2696]: E0527 17:38:55.582830 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.583080 kubelet[2696]: E0527 17:38:55.583063 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.583080 kubelet[2696]: W0527 17:38:55.583074 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.583150 kubelet[2696]: E0527 17:38:55.583094 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.583345 kubelet[2696]: E0527 17:38:55.583318 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.583345 kubelet[2696]: W0527 17:38:55.583336 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.583406 kubelet[2696]: E0527 17:38:55.583349 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.583752 kubelet[2696]: E0527 17:38:55.583726 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.583752 kubelet[2696]: W0527 17:38:55.583741 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.583893 kubelet[2696]: E0527 17:38:55.583789 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.584044 kubelet[2696]: E0527 17:38:55.583950 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.584044 kubelet[2696]: W0527 17:38:55.583962 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.584044 kubelet[2696]: E0527 17:38:55.583992 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.584225 kubelet[2696]: E0527 17:38:55.584170 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.584225 kubelet[2696]: W0527 17:38:55.584179 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.584283 kubelet[2696]: E0527 17:38:55.584241 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.584476 kubelet[2696]: E0527 17:38:55.584371 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.584476 kubelet[2696]: W0527 17:38:55.584396 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.584476 kubelet[2696]: E0527 17:38:55.584439 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.584642 kubelet[2696]: E0527 17:38:55.584619 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.584694 kubelet[2696]: W0527 17:38:55.584655 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.584810 kubelet[2696]: E0527 17:38:55.584716 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.584844 kubelet[2696]: E0527 17:38:55.584834 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.584844 kubelet[2696]: W0527 17:38:55.584841 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.584909 kubelet[2696]: E0527 17:38:55.584855 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.585154 kubelet[2696]: E0527 17:38:55.585128 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.585217 kubelet[2696]: W0527 17:38:55.585154 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.585217 kubelet[2696]: E0527 17:38:55.585184 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.585476 kubelet[2696]: E0527 17:38:55.585459 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.585476 kubelet[2696]: W0527 17:38:55.585473 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.585545 kubelet[2696]: E0527 17:38:55.585513 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.585943 kubelet[2696]: E0527 17:38:55.585739 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.585943 kubelet[2696]: W0527 17:38:55.585758 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.585943 kubelet[2696]: E0527 17:38:55.585796 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.586029 kubelet[2696]: E0527 17:38:55.585973 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.586029 kubelet[2696]: W0527 17:38:55.585984 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.586074 kubelet[2696]: E0527 17:38:55.586028 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.586225 kubelet[2696]: E0527 17:38:55.586208 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.586225 kubelet[2696]: W0527 17:38:55.586224 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.586291 kubelet[2696]: E0527 17:38:55.586278 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.586530 kubelet[2696]: E0527 17:38:55.586490 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.586530 kubelet[2696]: W0527 17:38:55.586518 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.586586 kubelet[2696]: E0527 17:38:55.586561 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.586848 kubelet[2696]: E0527 17:38:55.586814 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.586848 kubelet[2696]: W0527 17:38:55.586840 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.587010 kubelet[2696]: E0527 17:38:55.586876 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:38:55.587255 kubelet[2696]: E0527 17:38:55.587234 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.587255 kubelet[2696]: W0527 17:38:55.587252 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.587327 kubelet[2696]: E0527 17:38:55.587268 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.587528 kubelet[2696]: E0527 17:38:55.587511 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.587528 kubelet[2696]: W0527 17:38:55.587522 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.587654 kubelet[2696]: E0527 17:38:55.587555 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.587789 kubelet[2696]: E0527 17:38:55.587772 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.587789 kubelet[2696]: W0527 17:38:55.587783 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.587846 kubelet[2696]: E0527 17:38:55.587809 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.587958 kubelet[2696]: E0527 17:38:55.587942 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.587958 kubelet[2696]: W0527 17:38:55.587952 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.588034 kubelet[2696]: E0527 17:38:55.587975 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.588141 kubelet[2696]: E0527 17:38:55.588125 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.588141 kubelet[2696]: W0527 17:38:55.588134 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.588197 kubelet[2696]: E0527 17:38:55.588148 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
May 27 17:38:55.622314 containerd[1555]: time="2025-05-27T17:38:55.622264077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67864c9877-kzlbx,Uid:c42a2786-a352-4f4b-a067-c47d3bcec22f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d869a4d07686a1459d18b4a03262e202d6b9e2b2dbf45ba300071eda096239fd\"" May 27 17:38:55.652295 containerd[1555]: time="2025-05-27T17:38:55.623859386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:38:55.652374 kubelet[2696]: E0527 17:38:55.623115 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:55.660324 kubelet[2696]: E0527 17:38:55.660269 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:55.660324 kubelet[2696]: W0527 17:38:55.660300 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:55.660324 kubelet[2696]: E0527 17:38:55.660320 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:55.727672 containerd[1555]: time="2025-05-27T17:38:55.726751730Z" level=info msg="connecting to shim 47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4" address="unix:///run/containerd/s/eb06ce6926e6d95ee7f84b4b58802e783b25660e43f31c17295a1110e8df7631" namespace=k8s.io protocol=ttrpc version=3 May 27 17:38:55.764013 systemd[1]: Started cri-containerd-47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4.scope - libcontainer container 47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4.
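The driver-call.go / plugins.go churn above is the kubelet's FlexVolume prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and JSON-decodes whatever the driver prints to stdout. The binary is absent here, so stdout is empty and decoding "" fails with "unexpected end of JSON input". A minimal sketch of the handshake the prober expects, assuming the standard FlexVolume driver contract (a status/capabilities JSON object on stdout):

    package main

    // Sketch of a FlexVolume driver's "init" handshake. The kubelet execs the
    // driver binary and JSON-decodes its stdout; a missing binary produces
    // empty output, which is the "unexpected end of JSON input" logged above.

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Any well-formed JSON here satisfies driver-call.go's unmarshal.
            out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
            fmt.Println(string(out))
            return
        }
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }

Until Calico's flexvol-driver container (pulled and started further down) installs the real uds binary, every probe cycle re-logs the same three messages.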
May 27 17:38:55.825851 containerd[1555]: time="2025-05-27T17:38:55.825789719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7ngmw,Uid:017dd9fe-9713-454d-ac64-64b3b58b0847,Namespace:calico-system,Attempt:0,} returns sandbox id \"47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4\"" May 27 17:38:57.050008 kubelet[2696]: E0527 17:38:57.049792 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2xzb" podUID="1a8befa0-930c-44c3-a3e5-53b9fdc761fb" May 27 17:38:57.092258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543700394.mount: Deactivated successfully. May 27 17:38:57.710395 containerd[1555]: time="2025-05-27T17:38:57.710338934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:57.711383 containerd[1555]: time="2025-05-27T17:38:57.711354489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 17:38:57.713813 containerd[1555]: time="2025-05-27T17:38:57.713776776Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:57.720667 containerd[1555]: time="2025-05-27T17:38:57.720636430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:57.721573 containerd[1555]: time="2025-05-27T17:38:57.721473508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.09754433s" May 27 17:38:57.721573 containerd[1555]: time="2025-05-27T17:38:57.721523513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 17:38:57.722974 containerd[1555]: time="2025-05-27T17:38:57.722934814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:38:57.732870 containerd[1555]: time="2025-05-27T17:38:57.732803852Z" level=info msg="CreateContainer within sandbox \"d869a4d07686a1459d18b4a03262e202d6b9e2b2dbf45ba300071eda096239fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:38:57.742789 containerd[1555]: time="2025-05-27T17:38:57.742726361Z" level=info msg="Container 8c4c8087b0cede30160be06c10ae5b833947456706eda4077e46e6af32a8fc8b: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:57.754492 containerd[1555]: time="2025-05-27T17:38:57.754425872Z" level=info msg="CreateContainer within sandbox \"d869a4d07686a1459d18b4a03262e202d6b9e2b2dbf45ba300071eda096239fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8c4c8087b0cede30160be06c10ae5b833947456706eda4077e46e6af32a8fc8b\"" May 27 17:38:57.755093 containerd[1555]: time="2025-05-27T17:38:57.755036483Z" level=info msg="StartContainer for 
\"8c4c8087b0cede30160be06c10ae5b833947456706eda4077e46e6af32a8fc8b\"" May 27 17:38:57.756382 containerd[1555]: time="2025-05-27T17:38:57.756341924Z" level=info msg="connecting to shim 8c4c8087b0cede30160be06c10ae5b833947456706eda4077e46e6af32a8fc8b" address="unix:///run/containerd/s/08931c279a1401f5029c220dc66041a4c79e157be7a04d4ba106052ddac85b94" protocol=ttrpc version=3 May 27 17:38:57.783895 systemd[1]: Started cri-containerd-8c4c8087b0cede30160be06c10ae5b833947456706eda4077e46e6af32a8fc8b.scope - libcontainer container 8c4c8087b0cede30160be06c10ae5b833947456706eda4077e46e6af32a8fc8b. May 27 17:38:57.848728 containerd[1555]: time="2025-05-27T17:38:57.848657430Z" level=info msg="StartContainer for \"8c4c8087b0cede30160be06c10ae5b833947456706eda4077e46e6af32a8fc8b\" returns successfully" May 27 17:38:58.112288 kubelet[2696]: E0527 17:38:58.112172 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:58.122671 kubelet[2696]: I0527 17:38:58.122538 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67864c9877-kzlbx" podStartSLOduration=2.023645545 podStartE2EDuration="4.122518621s" podCreationTimestamp="2025-05-27 17:38:54 +0000 UTC" firstStartedPulling="2025-05-27 17:38:55.623620927 +0000 UTC m=+21.690407572" lastFinishedPulling="2025-05-27 17:38:57.722494003 +0000 UTC m=+23.789280648" observedRunningTime="2025-05-27 17:38:58.122179431 +0000 UTC m=+24.188966076" watchObservedRunningTime="2025-05-27 17:38:58.122518621 +0000 UTC m=+24.189305256" May 27 17:38:58.177500 kubelet[2696]: E0527 17:38:58.177452 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:58.177500 kubelet[2696]: W0527 17:38:58.177479 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:58.177500 kubelet[2696]: E0527 17:38:58.177502 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:38:58.177780 kubelet[2696]: E0527 17:38:58.177742 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:38:58.177780 kubelet[2696]: W0527 17:38:58.177750 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:38:58.177780 kubelet[2696]: E0527 17:38:58.177758 2696 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[the same three FlexVolume probe messages repeat verbatim in bursts through May 27 17:38:58.207917; duplicate entries omitted]
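The pod_startup_latency_tracker entry above for calico-typha-67864c9877-kzlbx is internally consistent: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration excludes the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check, with the timestamps copied from that entry:

    package main

    // Verifies the kubelet's startup-latency arithmetic from the log:
    //   podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
    //   podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-05-27 17:38:54 +0000 UTC")
        firstPull := mustParse("2025-05-27 17:38:55.623620927 +0000 UTC")
        lastPull := mustParse("2025-05-27 17:38:57.722494003 +0000 UTC")
        running := mustParse("2025-05-27 17:38:58.122518621 +0000 UTC")

        e2e := running.Sub(created)          // 4.122518621s
        slo := e2e - lastPull.Sub(firstPull) // 2.023645545s
        fmt.Println(e2e, slo)
    }

The 2.098873076s pull window is the same span containerd reported for the typha image ("in 2.09754433s" plus resolution overhead), so the two subsystems agree.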
May 27 17:38:58.992036 containerd[1555]: time="2025-05-27T17:38:58.991959840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:58.992913 containerd[1555]: time="2025-05-27T17:38:58.992846853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 17:38:58.994419 containerd[1555]: time="2025-05-27T17:38:58.994377599Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:58.996952 containerd[1555]: time="2025-05-27T17:38:58.996901005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:38:58.997585 containerd[1555]: time="2025-05-27T17:38:58.997533156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.274557705s" May 27 17:38:58.997585 containerd[1555]: time="2025-05-27T17:38:58.997570276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 17:38:59.000575 containerd[1555]: time="2025-05-27T17:38:59.000538271Z" level=info msg="CreateContainer within sandbox \"47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:38:59.011565 containerd[1555]: time="2025-05-27T17:38:59.011495698Z" level=info msg="Container d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4: CDI devices from CRI Config.CDIDevices: []" May 27 17:38:59.023674 containerd[1555]: time="2025-05-27T17:38:59.023467576Z" level=info msg="CreateContainer within sandbox \"47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4\"" May 27 17:38:59.024729 containerd[1555]: time="2025-05-27T17:38:59.024646859Z" level=info msg="StartContainer for \"d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4\"" May 27 17:38:59.027309 containerd[1555]: time="2025-05-27T17:38:59.027247589Z" level=info msg="connecting to
shim d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4" address="unix:///run/containerd/s/eb06ce6926e6d95ee7f84b4b58802e783b25660e43f31c17295a1110e8df7631" protocol=ttrpc version=3 May 27 17:38:59.049528 kubelet[2696]: E0527 17:38:59.049464 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2xzb" podUID="1a8befa0-930c-44c3-a3e5-53b9fdc761fb" May 27 17:38:59.055802 systemd[1]: Started cri-containerd-d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4.scope - libcontainer container d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4. May 27 17:38:59.115770 containerd[1555]: time="2025-05-27T17:38:59.115022865Z" level=info msg="StartContainer for \"d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4\" returns successfully" May 27 17:38:59.117952 kubelet[2696]: I0527 17:38:59.117909 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:38:59.120179 kubelet[2696]: E0527 17:38:59.118329 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:38:59.126193 systemd[1]: cri-containerd-d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4.scope: Deactivated successfully. May 27 17:38:59.128382 containerd[1555]: time="2025-05-27T17:38:59.128343556Z" level=info msg="received exit event container_id:\"d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4\" id:\"d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4\" pid:3387 exited_at:{seconds:1748367539 nanos:127944354}" May 27 17:38:59.128591 containerd[1555]: time="2025-05-27T17:38:59.128498077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4\" id:\"d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4\" pid:3387 exited_at:{seconds:1748367539 nanos:127944354}" May 27 17:38:59.153876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39809388799748cb68746cc5955b9a618c0ffaeed514f522ede4349ac062ae4-rootfs.mount: Deactivated successfully. 
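The recurring "cni plugin not initialized" status for csi-node-driver-v2xzb is an ordering problem rather than a fault in that pod: sandbox networking needs a CNI config in /etc/cni/net.d (written by the install-cni container that runs below) and, for Calico specifically, the /var/lib/calico/nodename file that calico/node creates once it is up. A small illustrative check of those two preconditions, the same paths the errors in this log stat:

    package main

    // Probes the two preconditions this boot log keeps tripping over before
    // the Calico node pod is fully up: a CNI network config on disk, and the
    // nodename file the Calico CNI plugin stats for every sandbox operation.

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confs, err := filepath.Glob("/etc/cni/net.d/*.conf*") // matches .conf and .conflist
        if err != nil || len(confs) == 0 {
            fmt.Println("cni plugin not initialized: no network config in /etc/cni/net.d")
        }
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            fmt.Println("calico/node not ready:", err)
        }
    }

Both checks fail at this point in the boot, which is why every RunPodSandbox attempt further down aborts with the same stat error.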
May 27 17:39:00.125086 containerd[1555]: time="2025-05-27T17:39:00.125032812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:39:01.050114 kubelet[2696]: E0527 17:39:01.050048 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2xzb" podUID="1a8befa0-930c-44c3-a3e5-53b9fdc761fb" May 27 17:39:03.051010 kubelet[2696]: E0527 17:39:03.050660 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2xzb" podUID="1a8befa0-930c-44c3-a3e5-53b9fdc761fb" May 27 17:39:03.505362 containerd[1555]: time="2025-05-27T17:39:03.505297423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:03.506695 containerd[1555]: time="2025-05-27T17:39:03.506591559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 17:39:03.507836 containerd[1555]: time="2025-05-27T17:39:03.507796367Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:03.509797 containerd[1555]: time="2025-05-27T17:39:03.509766005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:03.510345 containerd[1555]: time="2025-05-27T17:39:03.510313986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.385238473s" May 27 17:39:03.510381 containerd[1555]: time="2025-05-27T17:39:03.510347629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 17:39:03.512673 containerd[1555]: time="2025-05-27T17:39:03.512638091Z" level=info msg="CreateContainer within sandbox \"47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:39:03.523740 containerd[1555]: time="2025-05-27T17:39:03.523683801Z" level=info msg="Container 81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:03.534230 containerd[1555]: time="2025-05-27T17:39:03.534178295Z" level=info msg="CreateContainer within sandbox \"47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d\"" May 27 17:39:03.534795 containerd[1555]: time="2025-05-27T17:39:03.534753718Z" level=info msg="StartContainer for \"81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d\"" May 27 17:39:03.536538 
containerd[1555]: time="2025-05-27T17:39:03.536510435Z" level=info msg="connecting to shim 81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d" address="unix:///run/containerd/s/eb06ce6926e6d95ee7f84b4b58802e783b25660e43f31c17295a1110e8df7631" protocol=ttrpc version=3 May 27 17:39:03.559779 systemd[1]: Started cri-containerd-81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d.scope - libcontainer container 81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d. May 27 17:39:03.605905 containerd[1555]: time="2025-05-27T17:39:03.605856193Z" level=info msg="StartContainer for \"81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d\" returns successfully" May 27 17:39:04.578206 containerd[1555]: time="2025-05-27T17:39:04.578136633Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:39:04.581630 systemd[1]: cri-containerd-81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d.scope: Deactivated successfully. May 27 17:39:04.582289 containerd[1555]: time="2025-05-27T17:39:04.581718904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d\" id:\"81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d\" pid:3447 exited_at:{seconds:1748367544 nanos:581373394}" May 27 17:39:04.582289 containerd[1555]: time="2025-05-27T17:39:04.581724425Z" level=info msg="received exit event container_id:\"81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d\" id:\"81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d\" pid:3447 exited_at:{seconds:1748367544 nanos:581373394}" May 27 17:39:04.581996 systemd[1]: cri-containerd-81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d.scope: Consumed 581ms CPU time, 180.9M memory peak, 2.9M read from disk, 170.9M written to disk. May 27 17:39:04.604293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81d074c69c1a9067225de18abdd03fecff8014a32443ced3ad384b12c481da4d-rootfs.mount: Deactivated successfully. May 27 17:39:04.676020 kubelet[2696]: I0527 17:39:04.675152 2696 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:39:04.813210 systemd[1]: Created slice kubepods-besteffort-podb888eacd_de82_4555_a15d_4345439f3f57.slice - libcontainer container kubepods-besteffort-podb888eacd_de82_4555_a15d_4345439f3f57.slice. May 27 17:39:04.824806 systemd[1]: Created slice kubepods-burstable-pod2c10f117_d5f4_4217_a561_a8842ec090ba.slice - libcontainer container kubepods-burstable-pod2c10f117_d5f4_4217_a561_a8842ec090ba.slice. May 27 17:39:04.831384 systemd[1]: Created slice kubepods-besteffort-podbc5e9290_4a3a_4633_af11_d46d40c33905.slice - libcontainer container kubepods-besteffort-podbc5e9290_4a3a_4633_af11_d46d40c33905.slice. May 27 17:39:04.837859 systemd[1]: Created slice kubepods-besteffort-pod6b0dd1bc_0206_4f1e_9bbf_2f55ec102343.slice - libcontainer container kubepods-besteffort-pod6b0dd1bc_0206_4f1e_9bbf_2f55ec102343.slice. May 27 17:39:04.842580 systemd[1]: Created slice kubepods-besteffort-pod1e33f0f8_4ca3_40e9_893d_92f7065bb1f1.slice - libcontainer container kubepods-besteffort-pod1e33f0f8_4ca3_40e9_893d_92f7065bb1f1.slice. 
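The bursts of Created slice messages above and below are the kubelet's systemd cgroup driver carving out one transient slice per pod: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID rewritten as underscores because systemd reserves "-" as the slice hierarchy separator. A sketch of that mapping (podSliceName is an illustrative helper, not a kubelet function):

    package main

    // Maps a pod UID to the transient systemd slice name seen in this log,
    // e.g. UID 1a8befa0-930c-44c3-a3e5-53b9fdc761fb with besteffort QoS becomes
    // kubepods-besteffort-pod1a8befa0_930c_44c3_a3e5_53b9fdc761fb.slice.

    import (
        "fmt"
        "strings"
    )

    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "1a8befa0-930c-44c3-a3e5-53b9fdc761fb"))
        fmt.Println(podSliceName("burstable", "2c10f117-d5f4-4217-a561-a8842ec090ba"))
    }

The qos segment ("besteffort" or "burstable" here) nests each pod under the matching kubepods QoS slice.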
May 27 17:39:04.847500 systemd[1]: Created slice kubepods-besteffort-pod18982c18_ea20_425e_ae4b_4b49d57db0c3.slice - libcontainer container kubepods-besteffort-pod18982c18_ea20_425e_ae4b_4b49d57db0c3.slice. May 27 17:39:04.852819 systemd[1]: Created slice kubepods-burstable-pod7cc74db5_ee17_4eec_9986_c451d99762ba.slice - libcontainer container kubepods-burstable-pod7cc74db5_ee17_4eec_9986_c451d99762ba.slice. May 27 17:39:04.859321 systemd[1]: Created slice kubepods-besteffort-poda73d802b_0827_4fc9_87c5_8c54c8267e43.slice - libcontainer container kubepods-besteffort-poda73d802b_0827_4fc9_87c5_8c54c8267e43.slice. May 27 17:39:04.873863 kubelet[2696]: I0527 17:39:04.873829 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b0dd1bc-0206-4f1e-9bbf-2f55ec102343-tigera-ca-bundle\") pod \"calico-kube-controllers-7944959bbc-8rhtl\" (UID: \"6b0dd1bc-0206-4f1e-9bbf-2f55ec102343\") " pod="calico-system/calico-kube-controllers-7944959bbc-8rhtl" May 27 17:39:04.873863 kubelet[2696]: I0527 17:39:04.873867 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjsb\" (UniqueName: \"kubernetes.io/projected/6b0dd1bc-0206-4f1e-9bbf-2f55ec102343-kube-api-access-tbjsb\") pod \"calico-kube-controllers-7944959bbc-8rhtl\" (UID: \"6b0dd1bc-0206-4f1e-9bbf-2f55ec102343\") " pod="calico-system/calico-kube-controllers-7944959bbc-8rhtl" May 27 17:39:04.874007 kubelet[2696]: I0527 17:39:04.873886 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc5e9290-4a3a-4633-af11-d46d40c33905-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-5nh2v\" (UID: \"bc5e9290-4a3a-4633-af11-d46d40c33905\") " pod="calico-system/goldmane-78d55f7ddc-5nh2v" May 27 17:39:04.874007 kubelet[2696]: I0527 17:39:04.873902 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfk8h\" (UniqueName: \"kubernetes.io/projected/bc5e9290-4a3a-4633-af11-d46d40c33905-kube-api-access-rfk8h\") pod \"goldmane-78d55f7ddc-5nh2v\" (UID: \"bc5e9290-4a3a-4633-af11-d46d40c33905\") " pod="calico-system/goldmane-78d55f7ddc-5nh2v" May 27 17:39:04.874007 kubelet[2696]: I0527 17:39:04.873919 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cc74db5-ee17-4eec-9986-c451d99762ba-config-volume\") pod \"coredns-668d6bf9bc-c58l4\" (UID: \"7cc74db5-ee17-4eec-9986-c451d99762ba\") " pod="kube-system/coredns-668d6bf9bc-c58l4" May 27 17:39:04.874007 kubelet[2696]: I0527 17:39:04.873933 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c10f117-d5f4-4217-a561-a8842ec090ba-config-volume\") pod \"coredns-668d6bf9bc-cnqdb\" (UID: \"2c10f117-d5f4-4217-a561-a8842ec090ba\") " pod="kube-system/coredns-668d6bf9bc-cnqdb" May 27 17:39:04.874007 kubelet[2696]: I0527 17:39:04.873971 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7djd\" (UniqueName: \"kubernetes.io/projected/7cc74db5-ee17-4eec-9986-c451d99762ba-kube-api-access-d7djd\") pod \"coredns-668d6bf9bc-c58l4\" (UID: \"7cc74db5-ee17-4eec-9986-c451d99762ba\") " pod="kube-system/coredns-668d6bf9bc-c58l4" May 27 
17:39:04.874139 kubelet[2696]: I0527 17:39:04.874027 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b888eacd-de82-4555-a15d-4345439f3f57-whisker-backend-key-pair\") pod \"whisker-67c68ddb4b-k4dr6\" (UID: \"b888eacd-de82-4555-a15d-4345439f3f57\") " pod="calico-system/whisker-67c68ddb4b-k4dr6" May 27 17:39:04.874139 kubelet[2696]: I0527 17:39:04.874066 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp2gx\" (UniqueName: \"kubernetes.io/projected/b888eacd-de82-4555-a15d-4345439f3f57-kube-api-access-xp2gx\") pod \"whisker-67c68ddb4b-k4dr6\" (UID: \"b888eacd-de82-4555-a15d-4345439f3f57\") " pod="calico-system/whisker-67c68ddb4b-k4dr6" May 27 17:39:04.874139 kubelet[2696]: I0527 17:39:04.874102 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-calico-apiserver-certs\") pod \"calico-apiserver-77658bc8b9-7hcld\" (UID: \"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1\") " pod="calico-apiserver/calico-apiserver-77658bc8b9-7hcld" May 27 17:39:04.874139 kubelet[2696]: I0527 17:39:04.874124 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18982c18-ea20-425e-ae4b-4b49d57db0c3-calico-apiserver-certs\") pod \"calico-apiserver-77658bc8b9-pns44\" (UID: \"18982c18-ea20-425e-ae4b-4b49d57db0c3\") " pod="calico-apiserver/calico-apiserver-77658bc8b9-pns44" May 27 17:39:04.874241 kubelet[2696]: I0527 17:39:04.874143 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4nsj\" (UniqueName: \"kubernetes.io/projected/2c10f117-d5f4-4217-a561-a8842ec090ba-kube-api-access-q4nsj\") pod \"coredns-668d6bf9bc-cnqdb\" (UID: \"2c10f117-d5f4-4217-a561-a8842ec090ba\") " pod="kube-system/coredns-668d6bf9bc-cnqdb" May 27 17:39:04.874241 kubelet[2696]: I0527 17:39:04.874164 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bc5e9290-4a3a-4633-af11-d46d40c33905-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-5nh2v\" (UID: \"bc5e9290-4a3a-4633-af11-d46d40c33905\") " pod="calico-system/goldmane-78d55f7ddc-5nh2v" May 27 17:39:04.874241 kubelet[2696]: I0527 17:39:04.874185 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvksc\" (UniqueName: \"kubernetes.io/projected/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-kube-api-access-wvksc\") pod \"calico-apiserver-77658bc8b9-7hcld\" (UID: \"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1\") " pod="calico-apiserver/calico-apiserver-77658bc8b9-7hcld" May 27 17:39:04.874241 kubelet[2696]: I0527 17:39:04.874218 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrsl\" (UniqueName: \"kubernetes.io/projected/18982c18-ea20-425e-ae4b-4b49d57db0c3-kube-api-access-ftrsl\") pod \"calico-apiserver-77658bc8b9-pns44\" (UID: \"18982c18-ea20-425e-ae4b-4b49d57db0c3\") " pod="calico-apiserver/calico-apiserver-77658bc8b9-pns44" May 27 17:39:04.874337 kubelet[2696]: I0527 17:39:04.874240 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a73d802b-0827-4fc9-87c5-8c54c8267e43-calico-apiserver-certs\") pod \"calico-apiserver-6888b85474-tvsvp\" (UID: \"a73d802b-0827-4fc9-87c5-8c54c8267e43\") " pod="calico-apiserver/calico-apiserver-6888b85474-tvsvp" May 27 17:39:04.874337 kubelet[2696]: I0527 17:39:04.874260 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkcb4\" (UniqueName: \"kubernetes.io/projected/a73d802b-0827-4fc9-87c5-8c54c8267e43-kube-api-access-mkcb4\") pod \"calico-apiserver-6888b85474-tvsvp\" (UID: \"a73d802b-0827-4fc9-87c5-8c54c8267e43\") " pod="calico-apiserver/calico-apiserver-6888b85474-tvsvp" May 27 17:39:04.874337 kubelet[2696]: I0527 17:39:04.874279 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b888eacd-de82-4555-a15d-4345439f3f57-whisker-ca-bundle\") pod \"whisker-67c68ddb4b-k4dr6\" (UID: \"b888eacd-de82-4555-a15d-4345439f3f57\") " pod="calico-system/whisker-67c68ddb4b-k4dr6" May 27 17:39:04.874337 kubelet[2696]: I0527 17:39:04.874300 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc5e9290-4a3a-4633-af11-d46d40c33905-config\") pod \"goldmane-78d55f7ddc-5nh2v\" (UID: \"bc5e9290-4a3a-4633-af11-d46d40c33905\") " pod="calico-system/goldmane-78d55f7ddc-5nh2v" May 27 17:39:05.055949 systemd[1]: Created slice kubepods-besteffort-pod1a8befa0_930c_44c3_a3e5_53b9fdc761fb.slice - libcontainer container kubepods-besteffort-pod1a8befa0_930c_44c3_a3e5_53b9fdc761fb.slice. May 27 17:39:05.058470 containerd[1555]: time="2025-05-27T17:39:05.058418557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2xzb,Uid:1a8befa0-930c-44c3-a3e5-53b9fdc761fb,Namespace:calico-system,Attempt:0,}" May 27 17:39:05.123542 containerd[1555]: time="2025-05-27T17:39:05.123078089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67c68ddb4b-k4dr6,Uid:b888eacd-de82-4555-a15d-4345439f3f57,Namespace:calico-system,Attempt:0,}" May 27 17:39:05.128936 kubelet[2696]: E0527 17:39:05.128883 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:05.129561 containerd[1555]: time="2025-05-27T17:39:05.129525599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cnqdb,Uid:2c10f117-d5f4-4217-a561-a8842ec090ba,Namespace:kube-system,Attempt:0,}" May 27 17:39:05.136010 containerd[1555]: time="2025-05-27T17:39:05.135856270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5nh2v,Uid:bc5e9290-4a3a-4633-af11-d46d40c33905,Namespace:calico-system,Attempt:0,}" May 27 17:39:05.141146 containerd[1555]: time="2025-05-27T17:39:05.141015837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944959bbc-8rhtl,Uid:6b0dd1bc-0206-4f1e-9bbf-2f55ec102343,Namespace:calico-system,Attempt:0,}" May 27 17:39:05.142362 containerd[1555]: time="2025-05-27T17:39:05.142330430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:39:05.145091 containerd[1555]: time="2025-05-27T17:39:05.145063583Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-7hcld,Uid:1e33f0f8-4ca3-40e9-893d-92f7065bb1f1,Namespace:calico-apiserver,Attempt:0,}" May 27 17:39:05.150934 containerd[1555]: time="2025-05-27T17:39:05.150840001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-pns44,Uid:18982c18-ea20-425e-ae4b-4b49d57db0c3,Namespace:calico-apiserver,Attempt:0,}" May 27 17:39:05.154047 containerd[1555]: time="2025-05-27T17:39:05.153999836Z" level=error msg="Failed to destroy network for sandbox \"ba455c1f6caf2284dd3c81877defac1327702fbb9eff788ba898feea2589b87d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.155264 kubelet[2696]: E0527 17:39:05.155218 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:05.155657 containerd[1555]: time="2025-05-27T17:39:05.155623070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c58l4,Uid:7cc74db5-ee17-4eec-9986-c451d99762ba,Namespace:kube-system,Attempt:0,}" May 27 17:39:05.165797 containerd[1555]: time="2025-05-27T17:39:05.165545779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888b85474-tvsvp,Uid:a73d802b-0827-4fc9-87c5-8c54c8267e43,Namespace:calico-apiserver,Attempt:0,}" May 27 17:39:05.228059 containerd[1555]: time="2025-05-27T17:39:05.227974475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2xzb,Uid:1a8befa0-930c-44c3-a3e5-53b9fdc761fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba455c1f6caf2284dd3c81877defac1327702fbb9eff788ba898feea2589b87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.229224 kubelet[2696]: E0527 17:39:05.229154 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba455c1f6caf2284dd3c81877defac1327702fbb9eff788ba898feea2589b87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.229302 kubelet[2696]: E0527 17:39:05.229264 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba455c1f6caf2284dd3c81877defac1327702fbb9eff788ba898feea2589b87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v2xzb" May 27 17:39:05.229334 kubelet[2696]: E0527 17:39:05.229302 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba455c1f6caf2284dd3c81877defac1327702fbb9eff788ba898feea2589b87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v2xzb" May 27 17:39:05.229435 kubelet[2696]: E0527 
17:39:05.229366 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v2xzb_calico-system(1a8befa0-930c-44c3-a3e5-53b9fdc761fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v2xzb_calico-system(1a8befa0-930c-44c3-a3e5-53b9fdc761fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba455c1f6caf2284dd3c81877defac1327702fbb9eff788ba898feea2589b87d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v2xzb" podUID="1a8befa0-930c-44c3-a3e5-53b9fdc761fb" May 27 17:39:05.335749 containerd[1555]: time="2025-05-27T17:39:05.335614062Z" level=error msg="Failed to destroy network for sandbox \"0d0a365e4fb158fd22a6a52061ad9a80cd8a4fa7535103d922cf1ec200a89188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.341776 containerd[1555]: time="2025-05-27T17:39:05.341729167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67c68ddb4b-k4dr6,Uid:b888eacd-de82-4555-a15d-4345439f3f57,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a365e4fb158fd22a6a52061ad9a80cd8a4fa7535103d922cf1ec200a89188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.342532 kubelet[2696]: E0527 17:39:05.342216 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a365e4fb158fd22a6a52061ad9a80cd8a4fa7535103d922cf1ec200a89188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.342532 kubelet[2696]: E0527 17:39:05.342295 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a365e4fb158fd22a6a52061ad9a80cd8a4fa7535103d922cf1ec200a89188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67c68ddb4b-k4dr6" May 27 17:39:05.342532 kubelet[2696]: E0527 17:39:05.342317 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a365e4fb158fd22a6a52061ad9a80cd8a4fa7535103d922cf1ec200a89188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67c68ddb4b-k4dr6" May 27 17:39:05.342667 kubelet[2696]: E0527 17:39:05.342374 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67c68ddb4b-k4dr6_calico-system(b888eacd-de82-4555-a15d-4345439f3f57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67c68ddb4b-k4dr6_calico-system(b888eacd-de82-4555-a15d-4345439f3f57)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"0d0a365e4fb158fd22a6a52061ad9a80cd8a4fa7535103d922cf1ec200a89188\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67c68ddb4b-k4dr6" podUID="b888eacd-de82-4555-a15d-4345439f3f57" May 27 17:39:05.389987 containerd[1555]: time="2025-05-27T17:39:05.389827952Z" level=error msg="Failed to destroy network for sandbox \"f319ec1cc1220b17be30386e8913b5228c7a624036d2e128c50c022ea7fe1b1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.391368 containerd[1555]: time="2025-05-27T17:39:05.391272249Z" level=error msg="Failed to destroy network for sandbox \"1d60d48419f13d2767d7d3a519305c15e0e03aeff7165577f03a7294bcf4519a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.393806 containerd[1555]: time="2025-05-27T17:39:05.393757946Z" level=error msg="Failed to destroy network for sandbox \"3c89fae670a675823a35cad1e3c53e478ec1b352a5de70c3e1c41377806c90ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.395428 containerd[1555]: time="2025-05-27T17:39:05.394270751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888b85474-tvsvp,Uid:a73d802b-0827-4fc9-87c5-8c54c8267e43,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f319ec1cc1220b17be30386e8913b5228c7a624036d2e128c50c022ea7fe1b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.395428 containerd[1555]: time="2025-05-27T17:39:05.395275000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944959bbc-8rhtl,Uid:6b0dd1bc-0206-4f1e-9bbf-2f55ec102343,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d60d48419f13d2767d7d3a519305c15e0e03aeff7165577f03a7294bcf4519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.395587 kubelet[2696]: E0527 17:39:05.395467 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d60d48419f13d2767d7d3a519305c15e0e03aeff7165577f03a7294bcf4519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.395587 kubelet[2696]: E0527 17:39:05.395558 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d60d48419f13d2767d7d3a519305c15e0e03aeff7165577f03a7294bcf4519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7944959bbc-8rhtl" May 27 17:39:05.395587 kubelet[2696]: E0527 17:39:05.395579 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d60d48419f13d2767d7d3a519305c15e0e03aeff7165577f03a7294bcf4519a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7944959bbc-8rhtl" May 27 17:39:05.395810 kubelet[2696]: E0527 17:39:05.395765 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f319ec1cc1220b17be30386e8913b5228c7a624036d2e128c50c022ea7fe1b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.395810 kubelet[2696]: E0527 17:39:05.395802 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f319ec1cc1220b17be30386e8913b5228c7a624036d2e128c50c022ea7fe1b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6888b85474-tvsvp" May 27 17:39:05.395886 kubelet[2696]: E0527 17:39:05.395817 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f319ec1cc1220b17be30386e8913b5228c7a624036d2e128c50c022ea7fe1b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6888b85474-tvsvp" May 27 17:39:05.395886 kubelet[2696]: E0527 17:39:05.395865 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6888b85474-tvsvp_calico-apiserver(a73d802b-0827-4fc9-87c5-8c54c8267e43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6888b85474-tvsvp_calico-apiserver(a73d802b-0827-4fc9-87c5-8c54c8267e43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f319ec1cc1220b17be30386e8913b5228c7a624036d2e128c50c022ea7fe1b1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6888b85474-tvsvp" podUID="a73d802b-0827-4fc9-87c5-8c54c8267e43" May 27 17:39:05.397013 kubelet[2696]: E0527 17:39:05.396978 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7944959bbc-8rhtl_calico-system(6b0dd1bc-0206-4f1e-9bbf-2f55ec102343)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7944959bbc-8rhtl_calico-system(6b0dd1bc-0206-4f1e-9bbf-2f55ec102343)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d60d48419f13d2767d7d3a519305c15e0e03aeff7165577f03a7294bcf4519a\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7944959bbc-8rhtl" podUID="6b0dd1bc-0206-4f1e-9bbf-2f55ec102343" May 27 17:39:05.398445 containerd[1555]: time="2025-05-27T17:39:05.398286677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cnqdb,Uid:2c10f117-d5f4-4217-a561-a8842ec090ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c89fae670a675823a35cad1e3c53e478ec1b352a5de70c3e1c41377806c90ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.398990 kubelet[2696]: E0527 17:39:05.398875 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c89fae670a675823a35cad1e3c53e478ec1b352a5de70c3e1c41377806c90ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.398990 kubelet[2696]: E0527 17:39:05.398951 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c89fae670a675823a35cad1e3c53e478ec1b352a5de70c3e1c41377806c90ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cnqdb" May 27 17:39:05.398990 kubelet[2696]: E0527 17:39:05.398973 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c89fae670a675823a35cad1e3c53e478ec1b352a5de70c3e1c41377806c90ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cnqdb" May 27 17:39:05.399115 kubelet[2696]: E0527 17:39:05.399026 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cnqdb_kube-system(2c10f117-d5f4-4217-a561-a8842ec090ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cnqdb_kube-system(2c10f117-d5f4-4217-a561-a8842ec090ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c89fae670a675823a35cad1e3c53e478ec1b352a5de70c3e1c41377806c90ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cnqdb" podUID="2c10f117-d5f4-4217-a561-a8842ec090ba" May 27 17:39:05.406938 containerd[1555]: time="2025-05-27T17:39:05.406865577Z" level=error msg="Failed to destroy network for sandbox \"b861c90673dc822aac553fb894467d57452ebb00e04580150fd36e93f95c3d74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.408491 containerd[1555]: time="2025-05-27T17:39:05.408385597Z" level=error msg="Failed to 
destroy network for sandbox \"84a6b9279e08da45a32232209ecfea490788abdc4731d2365ff8d10be6929683\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.408658 containerd[1555]: time="2025-05-27T17:39:05.408556388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-7hcld,Uid:1e33f0f8-4ca3-40e9-893d-92f7065bb1f1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b861c90673dc822aac553fb894467d57452ebb00e04580150fd36e93f95c3d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.408849 kubelet[2696]: E0527 17:39:05.408799 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b861c90673dc822aac553fb894467d57452ebb00e04580150fd36e93f95c3d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.408969 kubelet[2696]: E0527 17:39:05.408867 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b861c90673dc822aac553fb894467d57452ebb00e04580150fd36e93f95c3d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77658bc8b9-7hcld" May 27 17:39:05.408969 kubelet[2696]: E0527 17:39:05.408886 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b861c90673dc822aac553fb894467d57452ebb00e04580150fd36e93f95c3d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77658bc8b9-7hcld" May 27 17:39:05.408969 kubelet[2696]: E0527 17:39:05.408939 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77658bc8b9-7hcld_calico-apiserver(1e33f0f8-4ca3-40e9-893d-92f7065bb1f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77658bc8b9-7hcld_calico-apiserver(1e33f0f8-4ca3-40e9-893d-92f7065bb1f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b861c90673dc822aac553fb894467d57452ebb00e04580150fd36e93f95c3d74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77658bc8b9-7hcld" podUID="1e33f0f8-4ca3-40e9-893d-92f7065bb1f1" May 27 17:39:05.412576 containerd[1555]: time="2025-05-27T17:39:05.412522800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-pns44,Uid:18982c18-ea20-425e-ae4b-4b49d57db0c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"84a6b9279e08da45a32232209ecfea490788abdc4731d2365ff8d10be6929683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.413162 kubelet[2696]: E0527 17:39:05.413126 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6b9279e08da45a32232209ecfea490788abdc4731d2365ff8d10be6929683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.413248 kubelet[2696]: E0527 17:39:05.413204 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6b9279e08da45a32232209ecfea490788abdc4731d2365ff8d10be6929683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77658bc8b9-pns44" May 27 17:39:05.413248 kubelet[2696]: E0527 17:39:05.413234 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6b9279e08da45a32232209ecfea490788abdc4731d2365ff8d10be6929683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77658bc8b9-pns44" May 27 17:39:05.413319 kubelet[2696]: E0527 17:39:05.413288 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77658bc8b9-pns44_calico-apiserver(18982c18-ea20-425e-ae4b-4b49d57db0c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77658bc8b9-pns44_calico-apiserver(18982c18-ea20-425e-ae4b-4b49d57db0c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84a6b9279e08da45a32232209ecfea490788abdc4731d2365ff8d10be6929683\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77658bc8b9-pns44" podUID="18982c18-ea20-425e-ae4b-4b49d57db0c3" May 27 17:39:05.416272 containerd[1555]: time="2025-05-27T17:39:05.416148782Z" level=error msg="Failed to destroy network for sandbox \"4c6a153a90ffb427794393a5afacb574e80d2b920df511d8609c98dbe1cf8607\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.417866 containerd[1555]: time="2025-05-27T17:39:05.417836198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5nh2v,Uid:bc5e9290-4a3a-4633-af11-d46d40c33905,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c6a153a90ffb427794393a5afacb574e80d2b920df511d8609c98dbe1cf8607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.418040 kubelet[2696]: E0527 
17:39:05.418007 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c6a153a90ffb427794393a5afacb574e80d2b920df511d8609c98dbe1cf8607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.418107 kubelet[2696]: E0527 17:39:05.418056 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c6a153a90ffb427794393a5afacb574e80d2b920df511d8609c98dbe1cf8607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-5nh2v" May 27 17:39:05.418107 kubelet[2696]: E0527 17:39:05.418087 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c6a153a90ffb427794393a5afacb574e80d2b920df511d8609c98dbe1cf8607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-5nh2v" May 27 17:39:05.418201 kubelet[2696]: E0527 17:39:05.418118 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-5nh2v_calico-system(bc5e9290-4a3a-4633-af11-d46d40c33905)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-5nh2v_calico-system(bc5e9290-4a3a-4633-af11-d46d40c33905)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c6a153a90ffb427794393a5afacb574e80d2b920df511d8609c98dbe1cf8607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-5nh2v" podUID="bc5e9290-4a3a-4633-af11-d46d40c33905" May 27 17:39:05.421971 containerd[1555]: time="2025-05-27T17:39:05.421913980Z" level=error msg="Failed to destroy network for sandbox \"9e915cf05de7953f7bf7b25f2745e1f565fb99135e36fd67159b05bd59481836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.423439 containerd[1555]: time="2025-05-27T17:39:05.423410706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c58l4,Uid:7cc74db5-ee17-4eec-9986-c451d99762ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e915cf05de7953f7bf7b25f2745e1f565fb99135e36fd67159b05bd59481836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:39:05.423649 kubelet[2696]: E0527 17:39:05.423576 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e915cf05de7953f7bf7b25f2745e1f565fb99135e36fd67159b05bd59481836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 27 17:39:05.423649 kubelet[2696]: E0527 17:39:05.423655 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e915cf05de7953f7bf7b25f2745e1f565fb99135e36fd67159b05bd59481836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c58l4" May 27 17:39:05.423831 kubelet[2696]: E0527 17:39:05.423673 2696 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e915cf05de7953f7bf7b25f2745e1f565fb99135e36fd67159b05bd59481836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c58l4" May 27 17:39:05.423831 kubelet[2696]: E0527 17:39:05.423715 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c58l4_kube-system(7cc74db5-ee17-4eec-9986-c451d99762ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c58l4_kube-system(7cc74db5-ee17-4eec-9986-c451d99762ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e915cf05de7953f7bf7b25f2745e1f565fb99135e36fd67159b05bd59481836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-c58l4" podUID="7cc74db5-ee17-4eec-9986-c451d99762ba" May 27 17:39:10.725817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555790331.mount: Deactivated successfully. 
May 27 17:39:11.482413 containerd[1555]: time="2025-05-27T17:39:11.482344186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:11.487469 containerd[1555]: time="2025-05-27T17:39:11.487424575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 17:39:11.489643 containerd[1555]: time="2025-05-27T17:39:11.489586088Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:11.522525 containerd[1555]: time="2025-05-27T17:39:11.522465694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:11.523081 containerd[1555]: time="2025-05-27T17:39:11.523038722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.380669549s" May 27 17:39:11.523142 containerd[1555]: time="2025-05-27T17:39:11.523085780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 17:39:11.532685 containerd[1555]: time="2025-05-27T17:39:11.532641731Z" level=info msg="CreateContainer within sandbox \"47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 17:39:11.553803 containerd[1555]: time="2025-05-27T17:39:11.553749092Z" level=info msg="Container 2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:11.575416 containerd[1555]: time="2025-05-27T17:39:11.575374616Z" level=info msg="CreateContainer within sandbox \"47897cbdc1e31732bf881228afe49cdbda54a145c442e6dd98d335b3f92826e4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad\"" May 27 17:39:11.576146 containerd[1555]: time="2025-05-27T17:39:11.576111701Z" level=info msg="StartContainer for \"2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad\"" May 27 17:39:11.577758 containerd[1555]: time="2025-05-27T17:39:11.577727679Z" level=info msg="connecting to shim 2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad" address="unix:///run/containerd/s/eb06ce6926e6d95ee7f84b4b58802e783b25660e43f31c17295a1110e8df7631" protocol=ttrpc version=3 May 27 17:39:11.604771 systemd[1]: Started cri-containerd-2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad.scope - libcontainer container 2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad. May 27 17:39:11.658199 containerd[1555]: time="2025-05-27T17:39:11.658142658Z" level=info msg="StartContainer for \"2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad\" returns successfully" May 27 17:39:11.737220 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 17:39:11.738076 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. May 27 17:39:11.920914 kubelet[2696]: I0527 17:39:11.920842 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp2gx\" (UniqueName: \"kubernetes.io/projected/b888eacd-de82-4555-a15d-4345439f3f57-kube-api-access-xp2gx\") pod \"b888eacd-de82-4555-a15d-4345439f3f57\" (UID: \"b888eacd-de82-4555-a15d-4345439f3f57\") " May 27 17:39:11.921661 kubelet[2696]: I0527 17:39:11.921016 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b888eacd-de82-4555-a15d-4345439f3f57-whisker-ca-bundle\") pod \"b888eacd-de82-4555-a15d-4345439f3f57\" (UID: \"b888eacd-de82-4555-a15d-4345439f3f57\") " May 27 17:39:11.921661 kubelet[2696]: I0527 17:39:11.921043 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b888eacd-de82-4555-a15d-4345439f3f57-whisker-backend-key-pair\") pod \"b888eacd-de82-4555-a15d-4345439f3f57\" (UID: \"b888eacd-de82-4555-a15d-4345439f3f57\") " May 27 17:39:11.923091 kubelet[2696]: I0527 17:39:11.923028 2696 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b888eacd-de82-4555-a15d-4345439f3f57-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b888eacd-de82-4555-a15d-4345439f3f57" (UID: "b888eacd-de82-4555-a15d-4345439f3f57"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:39:11.926627 kubelet[2696]: I0527 17:39:11.926397 2696 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b888eacd-de82-4555-a15d-4345439f3f57-kube-api-access-xp2gx" (OuterVolumeSpecName: "kube-api-access-xp2gx") pod "b888eacd-de82-4555-a15d-4345439f3f57" (UID: "b888eacd-de82-4555-a15d-4345439f3f57"). InnerVolumeSpecName "kube-api-access-xp2gx". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:39:11.927817 kubelet[2696]: I0527 17:39:11.927750 2696 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b888eacd-de82-4555-a15d-4345439f3f57-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b888eacd-de82-4555-a15d-4345439f3f57" (UID: "b888eacd-de82-4555-a15d-4345439f3f57"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:39:11.927999 systemd[1]: var-lib-kubelet-pods-b888eacd\x2dde82\x2d4555\x2da15d\x2d4345439f3f57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxp2gx.mount: Deactivated successfully. May 27 17:39:11.931693 systemd[1]: var-lib-kubelet-pods-b888eacd\x2dde82\x2d4555\x2da15d\x2d4345439f3f57-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 27 17:39:12.022943 kubelet[2696]: I0527 17:39:12.022883 2696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xp2gx\" (UniqueName: \"kubernetes.io/projected/b888eacd-de82-4555-a15d-4345439f3f57-kube-api-access-xp2gx\") on node \"localhost\" DevicePath \"\"" May 27 17:39:12.022943 kubelet[2696]: I0527 17:39:12.022930 2696 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b888eacd-de82-4555-a15d-4345439f3f57-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 17:39:12.022943 kubelet[2696]: I0527 17:39:12.022945 2696 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b888eacd-de82-4555-a15d-4345439f3f57-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 17:39:12.077025 systemd[1]: Removed slice kubepods-besteffort-podb888eacd_de82_4555_a15d_4345439f3f57.slice - libcontainer container kubepods-besteffort-podb888eacd_de82_4555_a15d_4345439f3f57.slice. May 27 17:39:12.177734 kubelet[2696]: I0527 17:39:12.177547 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7ngmw" podStartSLOduration=1.481170485 podStartE2EDuration="17.177521367s" podCreationTimestamp="2025-05-27 17:38:55 +0000 UTC" firstStartedPulling="2025-05-27 17:38:55.827362298 +0000 UTC m=+21.894148943" lastFinishedPulling="2025-05-27 17:39:11.52371318 +0000 UTC m=+37.590499825" observedRunningTime="2025-05-27 17:39:12.176954892 +0000 UTC m=+38.243741557" watchObservedRunningTime="2025-05-27 17:39:12.177521367 +0000 UTC m=+38.244308012" May 27 17:39:12.234071 systemd[1]: Created slice kubepods-besteffort-pod97287dcd_fd61_4753_a782_d95c978e039a.slice - libcontainer container kubepods-besteffort-pod97287dcd_fd61_4753_a782_d95c978e039a.slice. 
May 27 17:39:12.307937 containerd[1555]: time="2025-05-27T17:39:12.307891743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad\" id:\"2eb1ab9d2091419b97f3274c6cc408c5416bdc3e3e16c542b01133de20e8f00d\" pid:3872 exit_status:1 exited_at:{seconds:1748367552 nanos:307505075}" May 27 17:39:12.325247 kubelet[2696]: I0527 17:39:12.325193 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr22r\" (UniqueName: \"kubernetes.io/projected/97287dcd-fd61-4753-a782-d95c978e039a-kube-api-access-wr22r\") pod \"whisker-68649cd6d-qn77g\" (UID: \"97287dcd-fd61-4753-a782-d95c978e039a\") " pod="calico-system/whisker-68649cd6d-qn77g" May 27 17:39:12.325247 kubelet[2696]: I0527 17:39:12.325242 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97287dcd-fd61-4753-a782-d95c978e039a-whisker-backend-key-pair\") pod \"whisker-68649cd6d-qn77g\" (UID: \"97287dcd-fd61-4753-a782-d95c978e039a\") " pod="calico-system/whisker-68649cd6d-qn77g" May 27 17:39:12.325247 kubelet[2696]: I0527 17:39:12.325260 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97287dcd-fd61-4753-a782-d95c978e039a-whisker-ca-bundle\") pod \"whisker-68649cd6d-qn77g\" (UID: \"97287dcd-fd61-4753-a782-d95c978e039a\") " pod="calico-system/whisker-68649cd6d-qn77g" May 27 17:39:12.538688 containerd[1555]: time="2025-05-27T17:39:12.538524864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68649cd6d-qn77g,Uid:97287dcd-fd61-4753-a782-d95c978e039a,Namespace:calico-system,Attempt:0,}" May 27 17:39:12.682077 systemd-networkd[1486]: cali4a80b323617: Link UP May 27 17:39:12.683950 systemd-networkd[1486]: cali4a80b323617: Gained carrier May 27 17:39:12.698967 containerd[1555]: 2025-05-27 17:39:12.561 [INFO][3886] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:39:12.698967 containerd[1555]: 2025-05-27 17:39:12.578 [INFO][3886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--68649cd6d--qn77g-eth0 whisker-68649cd6d- calico-system 97287dcd-fd61-4753-a782-d95c978e039a 947 0 2025-05-27 17:39:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68649cd6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-68649cd6d-qn77g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4a80b323617 [] [] }} ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-" May 27 17:39:12.698967 containerd[1555]: 2025-05-27 17:39:12.578 [INFO][3886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-eth0" May 27 17:39:12.698967 containerd[1555]: 2025-05-27 17:39:12.639 [INFO][3901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" HandleID="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Workload="localhost-k8s-whisker--68649cd6d--qn77g-eth0" May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.639 [INFO][3901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" HandleID="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Workload="localhost-k8s-whisker--68649cd6d--qn77g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a0e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-68649cd6d-qn77g", "timestamp":"2025-05-27 17:39:12.639355879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.640 [INFO][3901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.640 [INFO][3901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.640 [INFO][3901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.647 [INFO][3901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" host="localhost" May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.653 [INFO][3901] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.657 [INFO][3901] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.659 [INFO][3901] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.661 [INFO][3901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:12.699277 containerd[1555]: 2025-05-27 17:39:12.661 [INFO][3901] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" host="localhost" May 27 17:39:12.699533 containerd[1555]: 2025-05-27 17:39:12.663 [INFO][3901] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590 May 27 17:39:12.699533 containerd[1555]: 2025-05-27 17:39:12.666 [INFO][3901] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" host="localhost" May 27 17:39:12.699533 containerd[1555]: 2025-05-27 17:39:12.670 [INFO][3901] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" host="localhost" May 27 17:39:12.699533 containerd[1555]: 2025-05-27 17:39:12.670 [INFO][3901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" host="localhost" May 27 17:39:12.699533 containerd[1555]: 2025-05-27 17:39:12.670 [INFO][3901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:39:12.699533 containerd[1555]: 2025-05-27 17:39:12.670 [INFO][3901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" HandleID="k8s-pod-network.28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Workload="localhost-k8s-whisker--68649cd6d--qn77g-eth0" May 27 17:39:12.699722 containerd[1555]: 2025-05-27 17:39:12.673 [INFO][3886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68649cd6d--qn77g-eth0", GenerateName:"whisker-68649cd6d-", Namespace:"calico-system", SelfLink:"", UID:"97287dcd-fd61-4753-a782-d95c978e039a", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 39, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68649cd6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-68649cd6d-qn77g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a80b323617", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:12.699722 containerd[1555]: 2025-05-27 17:39:12.674 [INFO][3886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-eth0" May 27 17:39:12.699816 containerd[1555]: 2025-05-27 17:39:12.674 [INFO][3886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a80b323617 ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-eth0" May 27 17:39:12.699816 containerd[1555]: 2025-05-27 17:39:12.683 [INFO][3886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-eth0" May 27 17:39:12.699876 containerd[1555]: 2025-05-27 17:39:12.683 [INFO][3886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68649cd6d--qn77g-eth0", GenerateName:"whisker-68649cd6d-", Namespace:"calico-system", SelfLink:"", UID:"97287dcd-fd61-4753-a782-d95c978e039a", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 39, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68649cd6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590", Pod:"whisker-68649cd6d-qn77g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a80b323617", MAC:"02:3f:8e:3e:5a:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:12.699945 containerd[1555]: 2025-05-27 17:39:12.695 [INFO][3886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" Namespace="calico-system" Pod="whisker-68649cd6d-qn77g" WorkloadEndpoint="localhost-k8s-whisker--68649cd6d--qn77g-eth0" May 27 17:39:12.779141 containerd[1555]: time="2025-05-27T17:39:12.779043123Z" level=info msg="connecting to shim 28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590" address="unix:///run/containerd/s/ed5ab300c9f620a5f49311f19c163ee330d7603589f38d7c6b3e44a6cd3e6777" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:12.813916 systemd[1]: Started cri-containerd-28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590.scope - libcontainer container 28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590. 
May 27 17:39:12.828095 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:13.107027 containerd[1555]: time="2025-05-27T17:39:13.106499402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68649cd6d-qn77g,Uid:97287dcd-fd61-4753-a782-d95c978e039a,Namespace:calico-system,Attempt:0,} returns sandbox id \"28eb106e67a0e419110809c36759ae80ccb79181f03ede0cbce920f5891d1590\"" May 27 17:39:13.109376 containerd[1555]: time="2025-05-27T17:39:13.109345050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:39:13.261738 containerd[1555]: time="2025-05-27T17:39:13.261649437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad\" id:\"c1052335da3020c4f3f77af09840ad4cfb1d61de7fbeb893e473a07650878e3f\" pid:4076 exit_status:1 exited_at:{seconds:1748367553 nanos:261259475}" May 27 17:39:13.351451 containerd[1555]: time="2025-05-27T17:39:13.351392864Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:39:13.655524 containerd[1555]: time="2025-05-27T17:39:13.655464931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:39:13.662658 containerd[1555]: time="2025-05-27T17:39:13.662577245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:39:13.662906 kubelet[2696]: E0527 17:39:13.662861 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:39:13.663292 kubelet[2696]: E0527 17:39:13.662919 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:39:13.666655 kubelet[2696]: E0527 17:39:13.666614 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c57e48272564815bb33a455bb42c0db,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wr22r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68649cd6d-qn77g_calico-system(97287dcd-fd61-4753-a782-d95c978e039a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:39:13.668652 containerd[1555]: time="2025-05-27T17:39:13.668616102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:39:13.930800 containerd[1555]: time="2025-05-27T17:39:13.930746301Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:39:13.931947 containerd[1555]: time="2025-05-27T17:39:13.931902353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:39:13.932053 containerd[1555]: time="2025-05-27T17:39:13.931989797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:39:13.932235 kubelet[2696]: E0527 17:39:13.932168 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:39:13.932235 kubelet[2696]: E0527 17:39:13.932227 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:39:13.932410 kubelet[2696]: E0527 17:39:13.932349 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr22r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68649cd6d-qn77g_calico-system(97287dcd-fd61-4753-a782-d95c978e039a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:39:13.933588 kubelet[2696]: E0527 17:39:13.933540 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-68649cd6d-qn77g" podUID="97287dcd-fd61-4753-a782-d95c978e039a" May 27 17:39:14.052015 kubelet[2696]: I0527 17:39:14.051963 2696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b888eacd-de82-4555-a15d-4345439f3f57" path="/var/lib/kubelet/pods/b888eacd-de82-4555-a15d-4345439f3f57/volumes" May 27 17:39:14.168502 kubelet[2696]: E0527 17:39:14.168453 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-68649cd6d-qn77g" podUID="97287dcd-fd61-4753-a782-d95c978e039a" May 27 17:39:14.460769 systemd-networkd[1486]: cali4a80b323617: Gained IPv6LL May 27 17:39:17.050736 containerd[1555]: time="2025-05-27T17:39:17.050683383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-7hcld,Uid:1e33f0f8-4ca3-40e9-893d-92f7065bb1f1,Namespace:calico-apiserver,Attempt:0,}" May 27 17:39:17.184156 systemd-networkd[1486]: cali438030f4b88: Link UP May 27 17:39:17.184800 systemd-networkd[1486]: cali438030f4b88: Gained carrier May 27 17:39:17.198043 containerd[1555]: 2025-05-27 17:39:17.115 [INFO][4167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:39:17.198043 containerd[1555]: 2025-05-27 17:39:17.125 [INFO][4167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0 calico-apiserver-77658bc8b9- calico-apiserver 1e33f0f8-4ca3-40e9-893d-92f7065bb1f1 889 0 2025-05-27 17:38:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77658bc8b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-77658bc8b9-7hcld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali438030f4b88 [] [] }} ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-" May 27 17:39:17.198043 containerd[1555]: 2025-05-27 17:39:17.125 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" May 27 17:39:17.198043 containerd[1555]: 2025-05-27 17:39:17.151 [INFO][4183] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" HandleID="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Workload="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.151 [INFO][4183] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" HandleID="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Workload="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77658bc8b9-7hcld", "timestamp":"2025-05-27 17:39:17.151006786 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.151 [INFO][4183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.151 [INFO][4183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
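[annotation] Every ErrImagePull and ImagePullBackOff event earlier in this capture collapses to a single root cause: the anonymous bearer-token request that containerd sends to ghcr.io before pulling is answered with 403 Forbidden. A minimal Go sketch that replays just that request, with the URL copied verbatim from the log; a 403 here confirms the registry is refusing anonymous pulls for the repository, independent of anything on the node:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The exact anonymous token request containerd issued before pulling
	// ghcr.io/flatcar/calico/whisker-backend:v3.30.0 (URL copied from the log).
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)  // the kubelet saw "403 Forbidden" here
	fmt.Println(string(body)) // registries usually return a JSON error detail
}

[end annotation]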
May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.151 [INFO][4183] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.157 [INFO][4183] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" host="localhost" May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.161 [INFO][4183] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.165 [INFO][4183] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.166 [INFO][4183] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.168 [INFO][4183] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:17.198309 containerd[1555]: 2025-05-27 17:39:17.168 [INFO][4183] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" host="localhost" May 27 17:39:17.198800 containerd[1555]: 2025-05-27 17:39:17.170 [INFO][4183] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d May 27 17:39:17.198800 containerd[1555]: 2025-05-27 17:39:17.174 [INFO][4183] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" host="localhost" May 27 17:39:17.198800 containerd[1555]: 2025-05-27 17:39:17.179 [INFO][4183] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" host="localhost" May 27 17:39:17.198800 containerd[1555]: 2025-05-27 17:39:17.179 [INFO][4183] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" host="localhost" May 27 17:39:17.198800 containerd[1555]: 2025-05-27 17:39:17.179 [INFO][4183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
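[annotation] The IPAM exchange just logged (acquire lock, try affinity, load block, claim, release) hands the new calico-apiserver pod 192.168.88.130 out of the host's affine block 192.168.88.128/26. A small sketch of the arithmetic behind that block, using only values visible in the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Affine block from the log: 192.168.88.128/26.
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64
	// The address just claimed falls inside it:
	fmt.Println(block.Contains(net.ParseIP("192.168.88.130"))) // true
}

[end annotation]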
May 27 17:39:17.198800 containerd[1555]: 2025-05-27 17:39:17.179 [INFO][4183] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" HandleID="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Workload="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" May 27 17:39:17.199056 containerd[1555]: 2025-05-27 17:39:17.182 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0", GenerateName:"calico-apiserver-77658bc8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77658bc8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77658bc8b9-7hcld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali438030f4b88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:17.199130 containerd[1555]: 2025-05-27 17:39:17.182 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" May 27 17:39:17.199130 containerd[1555]: 2025-05-27 17:39:17.182 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali438030f4b88 ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" May 27 17:39:17.199130 containerd[1555]: 2025-05-27 17:39:17.185 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" May 27 17:39:17.199220 containerd[1555]: 2025-05-27 17:39:17.185 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0", GenerateName:"calico-apiserver-77658bc8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77658bc8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d", Pod:"calico-apiserver-77658bc8b9-7hcld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali438030f4b88", MAC:"9a:74:93:17:0e:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:17.199288 containerd[1555]: 2025-05-27 17:39:17.193 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-7hcld" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0" May 27 17:39:17.262848 containerd[1555]: time="2025-05-27T17:39:17.262781768Z" level=info msg="connecting to shim 350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" address="unix:///run/containerd/s/55aab45c42504f644edc183a2260a37b39aa1223f534395535ac622390b4ffd3" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:17.307847 systemd[1]: Started cri-containerd-350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d.scope - libcontainer container 350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d. May 27 17:39:17.321776 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:17.354761 containerd[1555]: time="2025-05-27T17:39:17.354704764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-7hcld,Uid:1e33f0f8-4ca3-40e9-893d-92f7065bb1f1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\"" May 27 17:39:17.359056 containerd[1555]: time="2025-05-27T17:39:17.358257076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:39:17.626248 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:47492.service - OpenSSH per-connection server daemon (10.0.0.1:47492). 
May 27 17:39:17.674009 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 47492 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:39:17.675547 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:17.680143 systemd-logind[1539]: New session 8 of user core. May 27 17:39:17.694821 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 17:39:17.830978 sshd[4272]: Connection closed by 10.0.0.1 port 47492 May 27 17:39:17.831310 sshd-session[4270]: pam_unix(sshd:session): session closed for user core May 27 17:39:17.835261 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:47492.service: Deactivated successfully. May 27 17:39:17.837303 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:39:17.838234 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. May 27 17:39:17.839375 systemd-logind[1539]: Removed session 8. May 27 17:39:18.364771 systemd-networkd[1486]: cali438030f4b88: Gained IPv6LL May 27 17:39:19.050540 kubelet[2696]: E0527 17:39:19.050482 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:19.051787 kubelet[2696]: E0527 17:39:19.050483 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:19.051833 containerd[1555]: time="2025-05-27T17:39:19.050982914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cnqdb,Uid:2c10f117-d5f4-4217-a561-a8842ec090ba,Namespace:kube-system,Attempt:0,}" May 27 17:39:19.051833 containerd[1555]: time="2025-05-27T17:39:19.051381613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944959bbc-8rhtl,Uid:6b0dd1bc-0206-4f1e-9bbf-2f55ec102343,Namespace:calico-system,Attempt:0,}" May 27 17:39:19.051833 containerd[1555]: time="2025-05-27T17:39:19.051398935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c58l4,Uid:7cc74db5-ee17-4eec-9986-c451d99762ba,Namespace:kube-system,Attempt:0,}" May 27 17:39:19.051833 containerd[1555]: time="2025-05-27T17:39:19.051696284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-pns44,Uid:18982c18-ea20-425e-ae4b-4b49d57db0c3,Namespace:calico-apiserver,Attempt:0,}" May 27 17:39:19.052490 containerd[1555]: time="2025-05-27T17:39:19.052417158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5nh2v,Uid:bc5e9290-4a3a-4633-af11-d46d40c33905,Namespace:calico-system,Attempt:0,}" May 27 17:39:19.162201 kubelet[2696]: I0527 17:39:19.161658 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:39:19.163212 kubelet[2696]: E0527 17:39:19.163191 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:19.187878 kubelet[2696]: E0527 17:39:19.187081 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:19.328570 systemd-networkd[1486]: cali9fe818f4df1: Link UP May 27 17:39:19.331984 systemd-networkd[1486]: cali9fe818f4df1: Gained carrier May 27 17:39:19.345290 containerd[1555]: 2025-05-27 17:39:19.201 
[INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:39:19.345290 containerd[1555]: 2025-05-27 17:39:19.225 [INFO][4328] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0 calico-kube-controllers-7944959bbc- calico-system 6b0dd1bc-0206-4f1e-9bbf-2f55ec102343 881 0 2025-05-27 17:38:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7944959bbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7944959bbc-8rhtl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9fe818f4df1 [] [] }} ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-" May 27 17:39:19.345290 containerd[1555]: 2025-05-27 17:39:19.225 [INFO][4328] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" May 27 17:39:19.345290 containerd[1555]: 2025-05-27 17:39:19.267 [INFO][4391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" HandleID="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Workload="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.267 [INFO][4391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" HandleID="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Workload="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7944959bbc-8rhtl", "timestamp":"2025-05-27 17:39:19.267169534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.267 [INFO][4391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.267 [INFO][4391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
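[annotation] A short SSH session is interleaved with the CNI activity above: systemd spawned a per-connection unit (sshd@7-10.0.0.35:22-10.0.0.1:47492.service), sshd accepted an RSA publickey for user core and logged its SHA256 fingerprint, and the whole session lasted well under a second, which is consistent with an automated probe. The logged fingerprint format can be reproduced from an authorized_keys entry with golang.org/x/crypto/ssh; this is a sketch, and the key file path is an assumption:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical path; point this at the actual authorized_keys entry.
	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		fmt.Println(err)
		return
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	// Prints e.g. "SHA256:agsMvw+..." in the same format sshd logged above.
	fmt.Println(ssh.FingerprintSHA256(pub))
}

[end annotation]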
May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.267 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.276 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" host="localhost" May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.287 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.293 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.296 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.299 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.345563 containerd[1555]: 2025-05-27 17:39:19.299 [INFO][4391] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" host="localhost" May 27 17:39:19.345800 containerd[1555]: 2025-05-27 17:39:19.300 [INFO][4391] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff May 27 17:39:19.345800 containerd[1555]: 2025-05-27 17:39:19.307 [INFO][4391] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" host="localhost" May 27 17:39:19.345800 containerd[1555]: 2025-05-27 17:39:19.315 [INFO][4391] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" host="localhost" May 27 17:39:19.345800 containerd[1555]: 2025-05-27 17:39:19.315 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" host="localhost" May 27 17:39:19.345800 containerd[1555]: 2025-05-27 17:39:19.315 [INFO][4391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
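[annotation] The recurring kubelet "Nameserver limits exceeded" errors (first seen above at 17:39:19.050) mean the node's resolv.conf lists more nameservers than the classic three-entry resolver limit, so kubelet truncates the list it hands to pods to 1.1.1.1 1.0.0.1 8.8.8.8. A sketch that performs the same count against a resolv.conf:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const limit = 3 // classic resolver limit that kubelet enforces
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > limit {
		fmt.Printf("%d nameservers found; kubelet will apply only %v\n",
			len(servers), servers[:limit])
	} else {
		fmt.Println("nameserver count within limit:", servers)
	}
}

[end annotation]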
May 27 17:39:19.345800 containerd[1555]: 2025-05-27 17:39:19.315 [INFO][4391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" HandleID="k8s-pod-network.240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Workload="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" May 27 17:39:19.345933 containerd[1555]: 2025-05-27 17:39:19.322 [INFO][4328] cni-plugin/k8s.go 418: Populated endpoint ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0", GenerateName:"calico-kube-controllers-7944959bbc-", Namespace:"calico-system", SelfLink:"", UID:"6b0dd1bc-0206-4f1e-9bbf-2f55ec102343", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7944959bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7944959bbc-8rhtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fe818f4df1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.345989 containerd[1555]: 2025-05-27 17:39:19.323 [INFO][4328] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" May 27 17:39:19.345989 containerd[1555]: 2025-05-27 17:39:19.323 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fe818f4df1 ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" May 27 17:39:19.345989 containerd[1555]: 2025-05-27 17:39:19.332 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" May 27 17:39:19.346052 containerd[1555]: 2025-05-27 17:39:19.333 [INFO][4328] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0", GenerateName:"calico-kube-controllers-7944959bbc-", Namespace:"calico-system", SelfLink:"", UID:"6b0dd1bc-0206-4f1e-9bbf-2f55ec102343", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7944959bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff", Pod:"calico-kube-controllers-7944959bbc-8rhtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fe818f4df1", MAC:"76:56:f7:3d:16:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.346104 containerd[1555]: 2025-05-27 17:39:19.341 [INFO][4328] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" Namespace="calico-system" Pod="calico-kube-controllers-7944959bbc-8rhtl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0" May 27 17:39:19.386571 containerd[1555]: time="2025-05-27T17:39:19.386518059Z" level=info msg="connecting to shim 240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff" address="unix:///run/containerd/s/9a13741a516f1eeafca6420ced9eafcdda099c9a7da8c337168f5213be7d102a" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:19.419789 systemd[1]: Started cri-containerd-240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff.scope - libcontainer container 240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff. 
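[annotation] Worth noting in the WorkloadEndpoint dumps: names like localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0 are not corruption. Calico doubles every '-' inside the node, pod, and interface components so that single dashes can act as field separators. A sketch of that mangling, reproducing the name seen above:

package main

import (
	"fmt"
	"strings"
)

// Sketch of the name mangling visible in the log: '-' inside each component
// is doubled so a single '-' can safely delimit node, pod, and interface.
func endpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return fmt.Sprintf("%s-k8s-%s-%s", esc(node), esc(pod), esc(iface))
}

func main() {
	fmt.Println(endpointName("localhost", "calico-kube-controllers-7944959bbc-8rhtl", "eth0"))
	// localhost-k8s-calico--kube--controllers--7944959bbc--8rhtl-eth0
}

[end annotation]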
May 27 17:39:19.436890 systemd-networkd[1486]: calieced2dbd7f0: Link UP May 27 17:39:19.438067 systemd-networkd[1486]: calieced2dbd7f0: Gained carrier May 27 17:39:19.442823 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:19.458042 containerd[1555]: 2025-05-27 17:39:19.205 [INFO][4332] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:39:19.458042 containerd[1555]: 2025-05-27 17:39:19.238 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--c58l4-eth0 coredns-668d6bf9bc- kube-system 7cc74db5-ee17-4eec-9986-c451d99762ba 886 0 2025-05-27 17:38:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-c58l4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieced2dbd7f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-" May 27 17:39:19.458042 containerd[1555]: 2025-05-27 17:39:19.239 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" May 27 17:39:19.458042 containerd[1555]: 2025-05-27 17:39:19.296 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" HandleID="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Workload="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.296 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" HandleID="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Workload="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000427b30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-c58l4", "timestamp":"2025-05-27 17:39:19.296402869 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.296 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.315 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.316 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.376 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" host="localhost" May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.386 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.395 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.401 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.404 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.458318 containerd[1555]: 2025-05-27 17:39:19.404 [INFO][4398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" host="localhost" May 27 17:39:19.458840 containerd[1555]: 2025-05-27 17:39:19.405 [INFO][4398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f May 27 17:39:19.458840 containerd[1555]: 2025-05-27 17:39:19.411 [INFO][4398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" host="localhost" May 27 17:39:19.458840 containerd[1555]: 2025-05-27 17:39:19.424 [INFO][4398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" host="localhost" May 27 17:39:19.458840 containerd[1555]: 2025-05-27 17:39:19.424 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" host="localhost" May 27 17:39:19.458840 containerd[1555]: 2025-05-27 17:39:19.424 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
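[annotation] Every CNI ADD in this capture brackets its address claim with "About to acquire host-wide IPAM lock" / "Acquired" / "Released". The five sandboxes requested at 17:39:19 race for that lock, their claims are forced through one at a time, and the addresses therefore come out consecutively. A toy model of the serialization pattern; this illustrates the ordering only and is not Calico's actual implementation:

package main

import (
	"fmt"
	"sync"
)

// Toy allocator mimicking the log's pattern: concurrent CNI ADDs serialize
// on one host-wide lock and receive consecutive addresses.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign(pod string, out chan<- string) {
	a.mu.Lock() // "Acquired host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d", a.next)
	a.next++
	a.mu.Unlock() // "Released host-wide IPAM lock."
	out <- pod + " -> " + ip
}

func main() {
	a := &allocator{next: 130}
	out := make(chan string, 4)
	var wg sync.WaitGroup
	for _, pod := range []string{
		"calico-apiserver", "calico-kube-controllers",
		"coredns-c58l4", "coredns-cnqdb",
	} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); a.assign(p, out) }(pod)
	}
	wg.Wait()
	close(out)
	for line := range out {
		fmt.Println(line)
	}
}

[end annotation]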
May 27 17:39:19.458840 containerd[1555]: 2025-05-27 17:39:19.424 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" HandleID="k8s-pod-network.ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Workload="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" May 27 17:39:19.458982 containerd[1555]: 2025-05-27 17:39:19.429 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--c58l4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cc74db5-ee17-4eec-9986-c451d99762ba", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-c58l4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieced2dbd7f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.459065 containerd[1555]: 2025-05-27 17:39:19.429 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" May 27 17:39:19.459065 containerd[1555]: 2025-05-27 17:39:19.429 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieced2dbd7f0 ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" May 27 17:39:19.459065 containerd[1555]: 2025-05-27 17:39:19.438 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" May 27 17:39:19.459145 
containerd[1555]: 2025-05-27 17:39:19.442 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--c58l4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cc74db5-ee17-4eec-9986-c451d99762ba", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f", Pod:"coredns-668d6bf9bc-c58l4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieced2dbd7f0", MAC:"ee:cd:e8:58:9a:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.459145 containerd[1555]: 2025-05-27 17:39:19.452 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" Namespace="kube-system" Pod="coredns-668d6bf9bc-c58l4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c58l4-eth0" May 27 17:39:19.493249 containerd[1555]: time="2025-05-27T17:39:19.493202594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944959bbc-8rhtl,Uid:6b0dd1bc-0206-4f1e-9bbf-2f55ec102343,Namespace:calico-system,Attempt:0,} returns sandbox id \"240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff\"" May 27 17:39:19.501044 containerd[1555]: time="2025-05-27T17:39:19.500970510Z" level=info msg="connecting to shim ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f" address="unix:///run/containerd/s/54703ef1e7843667170afb48676fd04e9be64af653321a50fdaf506a8c294b91" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:19.535023 systemd-networkd[1486]: cali0abc2e3ab40: Link UP May 27 17:39:19.535940 systemd-networkd[1486]: cali0abc2e3ab40: Gained carrier May 27 17:39:19.545950 systemd[1]: Started cri-containerd-ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f.scope - libcontainer container 
ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f. May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.209 [INFO][4373] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.243 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0 coredns-668d6bf9bc- kube-system 2c10f117-d5f4-4217-a561-a8842ec090ba 875 0 2025-05-27 17:38:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-cnqdb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0abc2e3ab40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.244 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.307 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" HandleID="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Workload="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.307 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" HandleID="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Workload="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e950), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-cnqdb", "timestamp":"2025-05-27 17:39:19.307259817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.307 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.424 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
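[annotation] In the endpoint dumps the CoreDNS ports appear as Go hex literals: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port). Trivial to confirm:

package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153
}

[end annotation]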
May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.427 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.477 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.488 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.497 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.500 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.504 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.504 [INFO][4408] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.508 [INFO][4408] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.514 [INFO][4408] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.527 [INFO][4408] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.527 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" host="localhost" May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.528 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
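[annotation] By this point the affine block has handed out four consecutive addresses, in the order the claims won the IPAM lock. A recap as data, with the pairings read directly off the log; 192.168.88.129 was presumably taken earlier by the whisker pod behind cali4a80b323617, though that claim is not in this excerpt:

package main

import "fmt"

func main() {
	// Pod -> address pairings read directly from the log above.
	assigned := []struct{ ip, pod string }{
		{"192.168.88.130", "calico-apiserver-77658bc8b9-7hcld"},
		{"192.168.88.131", "calico-kube-controllers-7944959bbc-8rhtl"},
		{"192.168.88.132", "coredns-668d6bf9bc-c58l4"},
		{"192.168.88.133", "coredns-668d6bf9bc-cnqdb"},
	}
	for _, a := range assigned {
		fmt.Printf("%s -> %s\n", a.ip, a.pod)
	}
}

[end annotation]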
May 27 17:39:19.559348 containerd[1555]: 2025-05-27 17:39:19.528 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" HandleID="k8s-pod-network.61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Workload="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" May 27 17:39:19.560691 containerd[1555]: 2025-05-27 17:39:19.533 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c10f117-d5f4-4217-a561-a8842ec090ba", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-cnqdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0abc2e3ab40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.560691 containerd[1555]: 2025-05-27 17:39:19.533 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" May 27 17:39:19.560691 containerd[1555]: 2025-05-27 17:39:19.533 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0abc2e3ab40 ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" May 27 17:39:19.560691 containerd[1555]: 2025-05-27 17:39:19.537 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" May 27 17:39:19.560691 
containerd[1555]: 2025-05-27 17:39:19.537 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c10f117-d5f4-4217-a561-a8842ec090ba", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc", Pod:"coredns-668d6bf9bc-cnqdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0abc2e3ab40", MAC:"42:e5:0e:ce:b5:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.560691 containerd[1555]: 2025-05-27 17:39:19.550 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-cnqdb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cnqdb-eth0" May 27 17:39:19.573480 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:19.599061 containerd[1555]: time="2025-05-27T17:39:19.596764885Z" level=info msg="connecting to shim 61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc" address="unix:///run/containerd/s/595452fbebee5ce2f3477abe1e53c731afd0f76e7d9754727ccd4266b693ec1f" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:19.664574 systemd[1]: Started cri-containerd-61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc.scope - libcontainer container 61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc. 
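[annotation] systemd-resolved logs "Failed to determine the local hostname and LLMNR/mDNS names, ignoring" repeatedly above (e.g. at 17:39:19.442823 and 17:39:19.573480), each time a new cali* link comes up. Plausibly this is because the node is registered simply as "localhost", which gives resolved no unique name to announce; the message appears cosmetic here since networking proceeds normally. A one-liner to see what the kernel reports as the hostname; this is a diagnostic sketch, not a remediation:

package main

import (
	"fmt"
	"os"
)

func main() {
	name, err := os.Hostname()
	if err != nil {
		fmt.Println("hostname lookup failed:", err)
		return
	}
	fmt.Println("kernel hostname:", name) // "localhost" on the node in this capture
}

[end annotation]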
May 27 17:39:19.691348 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:19.706935 containerd[1555]: time="2025-05-27T17:39:19.706744277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c58l4,Uid:7cc74db5-ee17-4eec-9986-c451d99762ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f\"" May 27 17:39:19.709460 kubelet[2696]: E0527 17:39:19.709428 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:19.715197 containerd[1555]: time="2025-05-27T17:39:19.715098183Z" level=info msg="CreateContainer within sandbox \"ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:39:19.745621 containerd[1555]: time="2025-05-27T17:39:19.744889517Z" level=info msg="Container e884fe87d89fe37ab338aa92fbac0f48bac272eb32c4bfd6a8d6f4f7d7aeb0af: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:19.750324 containerd[1555]: time="2025-05-27T17:39:19.750283195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cnqdb,Uid:2c10f117-d5f4-4217-a561-a8842ec090ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc\"" May 27 17:39:19.751791 systemd-networkd[1486]: cali6584b4a4207: Link UP May 27 17:39:19.753827 kubelet[2696]: E0527 17:39:19.753168 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:19.755092 systemd-networkd[1486]: cali6584b4a4207: Gained carrier May 27 17:39:19.765290 containerd[1555]: time="2025-05-27T17:39:19.765237196Z" level=info msg="CreateContainer within sandbox \"61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.210 [INFO][4356] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.246 [INFO][4356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0 goldmane-78d55f7ddc- calico-system bc5e9290-4a3a-4633-af11-d46d40c33905 888 0 2025-05-27 17:38:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-5nh2v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6584b4a4207 [] [] }} ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.246 [INFO][4356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" May 27 17:39:19.783093 
containerd[1555]: 2025-05-27 17:39:19.313 [INFO][4407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" HandleID="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Workload="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.314 [INFO][4407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" HandleID="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Workload="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000591e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-5nh2v", "timestamp":"2025-05-27 17:39:19.313692025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.314 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.528 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.528 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.581 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.589 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.624 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.626 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.630 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.630 [INFO][4407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.632 [INFO][4407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134 May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.640 [INFO][4407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.710 [INFO][4407] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 
17:39:19.712 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" host="localhost" May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.713 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:39:19.783093 containerd[1555]: 2025-05-27 17:39:19.713 [INFO][4407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" HandleID="k8s-pod-network.4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Workload="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" May 27 17:39:19.783720 containerd[1555]: 2025-05-27 17:39:19.739 [INFO][4356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"bc5e9290-4a3a-4633-af11-d46d40c33905", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-5nh2v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6584b4a4207", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.783720 containerd[1555]: 2025-05-27 17:39:19.739 [INFO][4356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" May 27 17:39:19.783720 containerd[1555]: 2025-05-27 17:39:19.739 [INFO][4356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6584b4a4207 ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" May 27 17:39:19.783720 containerd[1555]: 2025-05-27 17:39:19.754 [INFO][4356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" May 27 17:39:19.783720 containerd[1555]:
2025-05-27 17:39:19.756 [INFO][4356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"bc5e9290-4a3a-4633-af11-d46d40c33905", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134", Pod:"goldmane-78d55f7ddc-5nh2v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6584b4a4207", MAC:"62:7c:91:5e:2c:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.783720 containerd[1555]: 2025-05-27 17:39:19.771 [INFO][4356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5nh2v" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5nh2v-eth0" May 27 17:39:19.787837 containerd[1555]: time="2025-05-27T17:39:19.787780979Z" level=info msg="Container 578d732e8fecd6d942138ed8c795c7cb52fb5ddb6fccc59818301cbb089f087c: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:19.808101 containerd[1555]: time="2025-05-27T17:39:19.807752382Z" level=info msg="CreateContainer within sandbox \"ca5f293e47715989eee521f8674b2dcb190f1f7e742cbcb713f3b44314f0860f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e884fe87d89fe37ab338aa92fbac0f48bac272eb32c4bfd6a8d6f4f7d7aeb0af\"" May 27 17:39:19.816411 systemd-networkd[1486]: calid20576dc7f5: Link UP May 27 17:39:19.817831 systemd-networkd[1486]: calid20576dc7f5: Gained carrier May 27 17:39:19.819998 containerd[1555]: time="2025-05-27T17:39:19.818449309Z" level=info msg="CreateContainer within sandbox \"61a6ba1fd3839a27388c1825b5f97193204fe6c092a9263a5cbbef516e2b69cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"578d732e8fecd6d942138ed8c795c7cb52fb5ddb6fccc59818301cbb089f087c\"" May 27 17:39:19.824175 containerd[1555]: time="2025-05-27T17:39:19.821095928Z" level=info msg="StartContainer for \"578d732e8fecd6d942138ed8c795c7cb52fb5ddb6fccc59818301cbb089f087c\"" May 27 17:39:19.824175 containerd[1555]: time="2025-05-27T17:39:19.823514049Z" level=info msg="StartContainer for \"e884fe87d89fe37ab338aa92fbac0f48bac272eb32c4bfd6a8d6f4f7d7aeb0af\"" May 27
17:39:19.825569 containerd[1555]: time="2025-05-27T17:39:19.825508885Z" level=info msg="connecting to shim e884fe87d89fe37ab338aa92fbac0f48bac272eb32c4bfd6a8d6f4f7d7aeb0af" address="unix:///run/containerd/s/54703ef1e7843667170afb48676fd04e9be64af653321a50fdaf506a8c294b91" protocol=ttrpc version=3 May 27 17:39:19.829796 containerd[1555]: time="2025-05-27T17:39:19.829769636Z" level=info msg="connecting to shim 578d732e8fecd6d942138ed8c795c7cb52fb5ddb6fccc59818301cbb089f087c" address="unix:///run/containerd/s/595452fbebee5ce2f3477abe1e53c731afd0f76e7d9754727ccd4266b693ec1f" protocol=ttrpc version=3 May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.219 [INFO][4312] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.247 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0 calico-apiserver-77658bc8b9- calico-apiserver 18982c18-ea20-425e-ae4b-4b49d57db0c3 884 0 2025-05-27 17:38:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77658bc8b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77658bc8b9-pns44 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid20576dc7f5 [] [] }} ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.247 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.329 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.329 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77658bc8b9-pns44", "timestamp":"2025-05-27 17:39:19.329645593 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.329 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.715 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.715 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.733 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.744 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.767 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.779 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.783 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.783 [INFO][4410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.786 [INFO][4410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.794 [INFO][4410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.804 [INFO][4410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.804 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" host="localhost" May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.804 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:39:19.842230 containerd[1555]: 2025-05-27 17:39:19.804 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:19.843084 containerd[1555]: 2025-05-27 17:39:19.809 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0", GenerateName:"calico-apiserver-77658bc8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"18982c18-ea20-425e-ae4b-4b49d57db0c3", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77658bc8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77658bc8b9-pns44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid20576dc7f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.843084 containerd[1555]: 2025-05-27 17:39:19.810 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:19.843084 containerd[1555]: 2025-05-27 17:39:19.810 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid20576dc7f5 ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:19.843084 containerd[1555]: 2025-05-27 17:39:19.818 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:19.843084 containerd[1555]: 2025-05-27 17:39:19.823 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0", GenerateName:"calico-apiserver-77658bc8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"18982c18-ea20-425e-ae4b-4b49d57db0c3", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77658bc8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e", Pod:"calico-apiserver-77658bc8b9-pns44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid20576dc7f5", MAC:"7e:1e:26:83:94:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:19.843084 containerd[1555]: 2025-05-27 17:39:19.837 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Namespace="calico-apiserver" Pod="calico-apiserver-77658bc8b9-pns44" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:19.856858 systemd[1]: Started cri-containerd-e884fe87d89fe37ab338aa92fbac0f48bac272eb32c4bfd6a8d6f4f7d7aeb0af.scope - libcontainer container e884fe87d89fe37ab338aa92fbac0f48bac272eb32c4bfd6a8d6f4f7d7aeb0af. May 27 17:39:19.861832 systemd[1]: Started cri-containerd-578d732e8fecd6d942138ed8c795c7cb52fb5ddb6fccc59818301cbb089f087c.scope - libcontainer container 578d732e8fecd6d942138ed8c795c7cb52fb5ddb6fccc59818301cbb089f087c. May 27 17:39:19.882495 containerd[1555]: time="2025-05-27T17:39:19.882444740Z" level=info msg="connecting to shim 4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134" address="unix:///run/containerd/s/aab92ecc49e611b15173e2e5395262588a0371b679462daba6921cef4c22b918" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:19.887931 containerd[1555]: time="2025-05-27T17:39:19.887742539Z" level=info msg="connecting to shim aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" address="unix:///run/containerd/s/6a603f4d00843fdb28ac919f0504f55d4f30a17060704b316e0e4171f0939589" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:19.912816 systemd[1]: Started cri-containerd-4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134.scope - libcontainer container 4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134.
May 27 17:39:19.927921 systemd[1]: Started cri-containerd-aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e.scope - libcontainer container aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e. May 27 17:39:19.945670 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:19.948289 containerd[1555]: time="2025-05-27T17:39:19.948240994Z" level=info msg="StartContainer for \"e884fe87d89fe37ab338aa92fbac0f48bac272eb32c4bfd6a8d6f4f7d7aeb0af\" returns successfully" May 27 17:39:19.954096 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:19.954372 containerd[1555]: time="2025-05-27T17:39:19.954331481Z" level=info msg="StartContainer for \"578d732e8fecd6d942138ed8c795c7cb52fb5ddb6fccc59818301cbb089f087c\" returns successfully" May 27 17:39:20.011625 containerd[1555]: time="2025-05-27T17:39:20.011025009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77658bc8b9-pns44,Uid:18982c18-ea20-425e-ae4b-4b49d57db0c3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\"" May 27 17:39:20.022301 containerd[1555]: time="2025-05-27T17:39:20.022263061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5nh2v,Uid:bc5e9290-4a3a-4633-af11-d46d40c33905,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bb07055f6d2fdd82aa64872c50131680e9f38d51841ec0d0f0152694021e134\"" May 27 17:39:20.052069 containerd[1555]: time="2025-05-27T17:39:20.052015366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888b85474-tvsvp,Uid:a73d802b-0827-4fc9-87c5-8c54c8267e43,Namespace:calico-apiserver,Attempt:0,}" May 27 17:39:20.056070 containerd[1555]: time="2025-05-27T17:39:20.055215585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2xzb,Uid:1a8befa0-930c-44c3-a3e5-53b9fdc761fb,Namespace:calico-system,Attempt:0,}" May 27 17:39:20.192339 kubelet[2696]: E0527 17:39:20.192311 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:20.195993 kubelet[2696]: E0527 17:39:20.195917 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:20.213528 kubelet[2696]: I0527 17:39:20.213387 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c58l4" podStartSLOduration=40.212581476 podStartE2EDuration="40.212581476s" podCreationTimestamp="2025-05-27 17:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:39:20.208538424 +0000 UTC m=+46.275325069" watchObservedRunningTime="2025-05-27 17:39:20.212581476 +0000 UTC m=+46.279368121" May 27 17:39:20.227788 kubelet[2696]: I0527 17:39:20.227721 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cnqdb" podStartSLOduration=40.227700204 podStartE2EDuration="40.227700204s" podCreationTimestamp="2025-05-27 17:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 
17:39:20.227557736 +0000 UTC m=+46.294344381" watchObservedRunningTime="2025-05-27 17:39:20.227700204 +0000 UTC m=+46.294486839" May 27 17:39:20.255873 systemd-networkd[1486]: cali825e4130159: Link UP May 27 17:39:20.257499 systemd-networkd[1486]: cali825e4130159: Gained carrier May 27 17:39:20.277981 systemd-networkd[1486]: vxlan.calico: Link UP May 27 17:39:20.278131 systemd-networkd[1486]: vxlan.calico: Gained carrier May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.144 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0 calico-apiserver-6888b85474- calico-apiserver a73d802b-0827-4fc9-87c5-8c54c8267e43 879 0 2025-05-27 17:38:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6888b85474 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6888b85474-tvsvp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali825e4130159 [] [] }} ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.144 [INFO][4822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.187 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" HandleID="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Workload="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.188 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" HandleID="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Workload="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af2e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6888b85474-tvsvp", "timestamp":"2025-05-27 17:39:20.187857102 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.188 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.188 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.188 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.198 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.212 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.224 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.228 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.233 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.233 [INFO][4876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.235 [INFO][4876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68 May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.239 [INFO][4876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.245 [INFO][4876] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.245 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" host="localhost" May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.245 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:39:20.290351 containerd[1555]: 2025-05-27 17:39:20.245 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" HandleID="k8s-pod-network.c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Workload="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" May 27 17:39:20.290946 containerd[1555]: 2025-05-27 17:39:20.250 [INFO][4822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0", GenerateName:"calico-apiserver-6888b85474-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73d802b-0827-4fc9-87c5-8c54c8267e43", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6888b85474", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6888b85474-tvsvp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali825e4130159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:20.290946 containerd[1555]: 2025-05-27 17:39:20.250 [INFO][4822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" May 27 17:39:20.290946 containerd[1555]: 2025-05-27 17:39:20.250 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali825e4130159 ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" May 27 17:39:20.290946 containerd[1555]: 2025-05-27 17:39:20.258 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" May 27 17:39:20.290946 containerd[1555]: 2025-05-27 17:39:20.258 [INFO][4822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0", GenerateName:"calico-apiserver-6888b85474-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73d802b-0827-4fc9-87c5-8c54c8267e43", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6888b85474", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68", Pod:"calico-apiserver-6888b85474-tvsvp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali825e4130159", MAC:"fa:c9:5c:b6:1f:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:20.290946 containerd[1555]: 2025-05-27 17:39:20.278 [INFO][4822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-tvsvp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--tvsvp-eth0" May 27 17:39:20.343954 containerd[1555]: time="2025-05-27T17:39:20.343662302Z" level=info msg="connecting to shim c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68" address="unix:///run/containerd/s/7db99b46021ad62dae8230d05e1466623050bbd9c46357e8348f96eda377c26a" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:20.364286 systemd-networkd[1486]: cali2b69a9f9281: Link UP May 27 17:39:20.372004 systemd-networkd[1486]: cali2b69a9f9281: Gained carrier May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.127 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v2xzb-eth0 csi-node-driver- calico-system 1a8befa0-930c-44c3-a3e5-53b9fdc761fb 770 0 2025-05-27 17:38:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-v2xzb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2b69a9f9281 [] [] }} ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb"
WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.130 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-eth0" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.228 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" HandleID="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Workload="localhost-k8s-csi--node--driver--v2xzb-eth0" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.229 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" HandleID="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Workload="localhost-k8s-csi--node--driver--v2xzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000577c80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v2xzb", "timestamp":"2025-05-27 17:39:20.228727793 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.229 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.246 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.247 [INFO][4880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.299 [INFO][4880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.310 [INFO][4880] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.325 [INFO][4880] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.327 [INFO][4880] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.330 [INFO][4880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.330 [INFO][4880] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.332 [INFO][4880] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31 May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.337 [INFO][4880] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.346 [INFO][4880] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.346 [INFO][4880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" host="localhost" May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.346 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:39:20.392521 containerd[1555]: 2025-05-27 17:39:20.346 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" HandleID="k8s-pod-network.209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Workload="localhost-k8s-csi--node--driver--v2xzb-eth0" May 27 17:39:20.393249 containerd[1555]: 2025-05-27 17:39:20.355 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v2xzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a8befa0-930c-44c3-a3e5-53b9fdc761fb", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v2xzb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b69a9f9281", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:20.393249 containerd[1555]: 2025-05-27 17:39:20.355 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-eth0" May 27 17:39:20.393249 containerd[1555]: 2025-05-27 17:39:20.355 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b69a9f9281 ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-eth0" May 27 17:39:20.393249 containerd[1555]: 2025-05-27 17:39:20.364 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-eth0" May 27 17:39:20.393249 containerd[1555]: 2025-05-27 17:39:20.371 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb"
WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v2xzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a8befa0-930c-44c3-a3e5-53b9fdc761fb", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 38, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31", Pod:"csi-node-driver-v2xzb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b69a9f9281", MAC:"66:86:c2:29:89:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:20.393249 containerd[1555]: 2025-05-27 17:39:20.388 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" Namespace="calico-system" Pod="csi-node-driver-v2xzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2xzb-eth0" May 27 17:39:20.393883 systemd[1]: Started cri-containerd-c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68.scope - libcontainer container c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68. May 27 17:39:20.417488 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:20.500642 containerd[1555]: time="2025-05-27T17:39:20.498471010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888b85474-tvsvp,Uid:a73d802b-0827-4fc9-87c5-8c54c8267e43,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68\"" May 27 17:39:20.537921 containerd[1555]: time="2025-05-27T17:39:20.537787113Z" level=info msg="connecting to shim 209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31" address="unix:///run/containerd/s/6a3bea2c6c4c69a174200f9b5c1d53d804c5cd48a1affdca2019e752cb39ce19" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:20.578855 systemd[1]: Started cri-containerd-209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31.scope - libcontainer container 209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31.
May 27 17:39:20.593113 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:20.605086 systemd-networkd[1486]: cali0abc2e3ab40: Gained IPv6LL May 27 17:39:20.627782 containerd[1555]: time="2025-05-27T17:39:20.627734856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2xzb,Uid:1a8befa0-930c-44c3-a3e5-53b9fdc761fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31\"" May 27 17:39:20.651143 containerd[1555]: time="2025-05-27T17:39:20.650550505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:20.651581 containerd[1555]: time="2025-05-27T17:39:20.651532650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 17:39:20.661219 containerd[1555]: time="2025-05-27T17:39:20.661175125Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:20.663704 containerd[1555]: time="2025-05-27T17:39:20.663658448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:20.664330 containerd[1555]: time="2025-05-27T17:39:20.664291446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.305966875s" May 27 17:39:20.664330 containerd[1555]: time="2025-05-27T17:39:20.664328736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:39:20.665555 containerd[1555]: time="2025-05-27T17:39:20.665191075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 17:39:20.666700 containerd[1555]: time="2025-05-27T17:39:20.666654794Z" level=info msg="CreateContainer within sandbox \"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:39:20.678388 containerd[1555]: time="2025-05-27T17:39:20.678341478Z" level=info msg="Container 693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:20.689850 containerd[1555]: time="2025-05-27T17:39:20.689725954Z" level=info msg="CreateContainer within sandbox \"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\"" May 27 17:39:20.691103 containerd[1555]: time="2025-05-27T17:39:20.690773340Z" level=info msg="StartContainer for \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\"" May 27 17:39:20.694159 containerd[1555]: time="2025-05-27T17:39:20.694067456Z" level=info msg="connecting to shim 
693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7" address="unix:///run/containerd/s/55aab45c42504f644edc183a2260a37b39aa1223f534395535ac622390b4ffd3" protocol=ttrpc version=3 May 27 17:39:20.729789 systemd[1]: Started cri-containerd-693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7.scope - libcontainer container 693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7. May 27 17:39:20.780276 containerd[1555]: time="2025-05-27T17:39:20.780167903Z" level=info msg="StartContainer for \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" returns successfully" May 27 17:39:21.052788 systemd-networkd[1486]: cali9fe818f4df1: Gained IPv6LL May 27 17:39:21.180867 systemd-networkd[1486]: cali6584b4a4207: Gained IPv6LL May 27 17:39:21.211930 kubelet[2696]: E0527 17:39:21.211870 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:21.213711 kubelet[2696]: E0527 17:39:21.213652 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:21.223785 kubelet[2696]: I0527 17:39:21.223546 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77658bc8b9-7hcld" podStartSLOduration=25.916383338 podStartE2EDuration="29.223423258s" podCreationTimestamp="2025-05-27 17:38:52 +0000 UTC" firstStartedPulling="2025-05-27 17:39:17.358001515 +0000 UTC m=+43.424788160" lastFinishedPulling="2025-05-27 17:39:20.665041435 +0000 UTC m=+46.731828080" observedRunningTime="2025-05-27 17:39:21.223068091 +0000 UTC m=+47.289854736" watchObservedRunningTime="2025-05-27 17:39:21.223423258 +0000 UTC m=+47.290209903" May 27 17:39:21.372755 systemd-networkd[1486]: calieced2dbd7f0: Gained IPv6LL May 27 17:39:21.436757 systemd-networkd[1486]: vxlan.calico: Gained IPv6LL May 27 17:39:21.564810 systemd-networkd[1486]: calid20576dc7f5: Gained IPv6LL May 27 17:39:21.820781 systemd-networkd[1486]: cali2b69a9f9281: Gained IPv6LL May 27 17:39:22.205263 systemd-networkd[1486]: cali825e4130159: Gained IPv6LL May 27 17:39:22.215118 kubelet[2696]: E0527 17:39:22.214760 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:22.215118 kubelet[2696]: E0527 17:39:22.214774 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:39:22.847435 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:47504.service - OpenSSH per-connection server daemon (10.0.0.1:47504). May 27 17:39:22.920140 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 47504 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:39:22.922055 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:22.926674 systemd-logind[1539]: New session 9 of user core. May 27 17:39:22.940817 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 27 17:39:23.073695 sshd[5126]: Connection closed by 10.0.0.1 port 47504 May 27 17:39:23.074009 sshd-session[5124]: pam_unix(sshd:session): session closed for user core May 27 17:39:23.077904 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:47504.service: Deactivated successfully. May 27 17:39:23.080020 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:39:23.081079 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. May 27 17:39:23.082213 systemd-logind[1539]: Removed session 9. May 27 17:39:23.089514 containerd[1555]: time="2025-05-27T17:39:23.089461718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:23.090413 containerd[1555]: time="2025-05-27T17:39:23.090379721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 17:39:23.091776 containerd[1555]: time="2025-05-27T17:39:23.091744243Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:23.105048 containerd[1555]: time="2025-05-27T17:39:23.104863531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:23.105374 containerd[1555]: time="2025-05-27T17:39:23.105239758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.440020099s" May 27 17:39:23.105374 containerd[1555]: time="2025-05-27T17:39:23.105272790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 17:39:23.106352 containerd[1555]: time="2025-05-27T17:39:23.106306621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:39:23.115186 containerd[1555]: time="2025-05-27T17:39:23.115094507Z" level=info msg="CreateContainer within sandbox \"240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 17:39:23.125619 containerd[1555]: time="2025-05-27T17:39:23.125038775Z" level=info msg="Container 14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:23.140155 containerd[1555]: time="2025-05-27T17:39:23.140091684Z" level=info msg="CreateContainer within sandbox \"240cdc9beb38706b832212dc6829a1f0a358ec6f40e2998a0c77fed5687a1dff\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44\"" May 27 17:39:23.140804 containerd[1555]: time="2025-05-27T17:39:23.140781168Z" level=info msg="StartContainer for \"14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44\"" May 27 17:39:23.142052 containerd[1555]: time="2025-05-27T17:39:23.142026967Z" level=info msg="connecting to shim 
14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44" address="unix:///run/containerd/s/9a13741a516f1eeafca6420ced9eafcdda099c9a7da8c337168f5213be7d102a" protocol=ttrpc version=3 May 27 17:39:23.202909 systemd[1]: Started cri-containerd-14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44.scope - libcontainer container 14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44. May 27 17:39:23.277005 containerd[1555]: time="2025-05-27T17:39:23.276968196Z" level=info msg="StartContainer for \"14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44\" returns successfully" May 27 17:39:23.464613 containerd[1555]: time="2025-05-27T17:39:23.464558893Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:23.465484 containerd[1555]: time="2025-05-27T17:39:23.465453151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 17:39:23.467108 containerd[1555]: time="2025-05-27T17:39:23.467065288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 360.718642ms" May 27 17:39:23.467108 containerd[1555]: time="2025-05-27T17:39:23.467104582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:39:23.468211 containerd[1555]: time="2025-05-27T17:39:23.468175663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:39:23.469168 containerd[1555]: time="2025-05-27T17:39:23.469133110Z" level=info msg="CreateContainer within sandbox \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:39:23.479160 containerd[1555]: time="2025-05-27T17:39:23.479120118Z" level=info msg="Container c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:23.490673 containerd[1555]: time="2025-05-27T17:39:23.490631478Z" level=info msg="CreateContainer within sandbox \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\"" May 27 17:39:23.491181 containerd[1555]: time="2025-05-27T17:39:23.491147897Z" level=info msg="StartContainer for \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\"" May 27 17:39:23.492290 containerd[1555]: time="2025-05-27T17:39:23.492145199Z" level=info msg="connecting to shim c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1" address="unix:///run/containerd/s/6a603f4d00843fdb28ac919f0504f55d4f30a17060704b316e0e4171f0939589" protocol=ttrpc version=3 May 27 17:39:23.513865 systemd[1]: Started cri-containerd-c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1.scope - libcontainer container c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1. 
May 27 17:39:23.709325 containerd[1555]: time="2025-05-27T17:39:23.709280579Z" level=info msg="StartContainer for \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" returns successfully" May 27 17:39:23.711410 containerd[1555]: time="2025-05-27T17:39:23.711372135Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:39:23.712950 containerd[1555]: time="2025-05-27T17:39:23.712800617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:39:23.713046 containerd[1555]: time="2025-05-27T17:39:23.712875918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:39:23.713142 kubelet[2696]: E0527 17:39:23.713105 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:39:23.713849 kubelet[2696]: E0527 17:39:23.713147 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:39:23.713960 containerd[1555]: time="2025-05-27T17:39:23.713549883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:39:23.717947 kubelet[2696]: E0527 17:39:23.717756 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfk8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5nh2v_calico-system(bc5e9290-4a3a-4633-af11-d46d40c33905): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:39:23.719779 kubelet[2696]: E0527 17:39:23.719737 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5nh2v" podUID="bc5e9290-4a3a-4633-af11-d46d40c33905" May 27 17:39:24.176129 containerd[1555]: time="2025-05-27T17:39:24.176055165Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:24.177652 containerd[1555]: time="2025-05-27T17:39:24.177016650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 17:39:24.179266 containerd[1555]: time="2025-05-27T17:39:24.179165134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 465.590653ms" May 27 17:39:24.179266 containerd[1555]: time="2025-05-27T17:39:24.179228322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:39:24.180225 containerd[1555]: time="2025-05-27T17:39:24.180197742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 17:39:24.181660 containerd[1555]: time="2025-05-27T17:39:24.181630942Z" level=info msg="CreateContainer within sandbox \"c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:39:24.192363 containerd[1555]: time="2025-05-27T17:39:24.192297014Z" level=info msg="Container d06a349280d338a5ae447a57fbc6ca2474278426a9364a6d84b839b95a3f4d49: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:24.203421 containerd[1555]: time="2025-05-27T17:39:24.203359359Z" level=info msg="CreateContainer within sandbox \"c3140b27b9192188e93602101eaf96333c1188923407450eef7eeb9aa8e62d68\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d06a349280d338a5ae447a57fbc6ca2474278426a9364a6d84b839b95a3f4d49\"" May 27 17:39:24.204119 containerd[1555]: time="2025-05-27T17:39:24.204049676Z" level=info msg="StartContainer for \"d06a349280d338a5ae447a57fbc6ca2474278426a9364a6d84b839b95a3f4d49\"" May 27 17:39:24.205626 containerd[1555]: time="2025-05-27T17:39:24.205567334Z" level=info msg="connecting to shim d06a349280d338a5ae447a57fbc6ca2474278426a9364a6d84b839b95a3f4d49" address="unix:///run/containerd/s/7db99b46021ad62dae8230d05e1466623050bbd9c46357e8348f96eda377c26a" protocol=ttrpc version=3 May 27 17:39:24.233875 systemd[1]: Started cri-containerd-d06a349280d338a5ae447a57fbc6ca2474278426a9364a6d84b839b95a3f4d49.scope - libcontainer container d06a349280d338a5ae447a57fbc6ca2474278426a9364a6d84b839b95a3f4d49. 
May 27 17:39:24.247328 kubelet[2696]: E0527 17:39:24.247263 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5nh2v" podUID="bc5e9290-4a3a-4633-af11-d46d40c33905" May 27 17:39:24.288175 kubelet[2696]: I0527 17:39:24.288056 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77658bc8b9-pns44" podStartSLOduration=28.833134289 podStartE2EDuration="32.287983892s" podCreationTimestamp="2025-05-27 17:38:52 +0000 UTC" firstStartedPulling="2025-05-27 17:39:20.013115304 +0000 UTC m=+46.079901939" lastFinishedPulling="2025-05-27 17:39:23.467964887 +0000 UTC m=+49.534751542" observedRunningTime="2025-05-27 17:39:24.254167604 +0000 UTC m=+50.320954249" watchObservedRunningTime="2025-05-27 17:39:24.287983892 +0000 UTC m=+50.354770537" May 27 17:39:24.304354 kubelet[2696]: I0527 17:39:24.303995 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7944959bbc-8rhtl" podStartSLOduration=25.696695149 podStartE2EDuration="29.30397544s" podCreationTimestamp="2025-05-27 17:38:55 +0000 UTC" firstStartedPulling="2025-05-27 17:39:19.498899902 +0000 UTC m=+45.565686537" lastFinishedPulling="2025-05-27 17:39:23.106180173 +0000 UTC m=+49.172966828" observedRunningTime="2025-05-27 17:39:24.287207604 +0000 UTC m=+50.353994249" watchObservedRunningTime="2025-05-27 17:39:24.30397544 +0000 UTC m=+50.370762085" May 27 17:39:24.336950 containerd[1555]: time="2025-05-27T17:39:24.336912718Z" level=info msg="StartContainer for \"d06a349280d338a5ae447a57fbc6ca2474278426a9364a6d84b839b95a3f4d49\" returns successfully" May 27 17:39:24.349762 containerd[1555]: time="2025-05-27T17:39:24.349712375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44\" id:\"4c04f97d445f64f01e7ec98fd082d2664b786d946382836e71298f7baa7d80b2\" pid:5252 exited_at:{seconds:1748367564 nanos:349309288}" May 27 17:39:25.255620 kubelet[2696]: I0527 17:39:25.255552 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:39:25.883814 kubelet[2696]: I0527 17:39:25.883715 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6888b85474-tvsvp" podStartSLOduration=29.220563964 podStartE2EDuration="32.883691412s" podCreationTimestamp="2025-05-27 17:38:53 +0000 UTC" firstStartedPulling="2025-05-27 17:39:20.516853637 +0000 UTC m=+46.583640282" lastFinishedPulling="2025-05-27 17:39:24.179981075 +0000 UTC m=+50.246767730" observedRunningTime="2025-05-27 17:39:25.274241205 +0000 UTC m=+51.341027851" watchObservedRunningTime="2025-05-27 17:39:25.883691412 +0000 UTC m=+51.950478067" May 27 17:39:26.286724 containerd[1555]: time="2025-05-27T17:39:26.286564637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:26.291986 
kubelet[2696]: I0527 17:39:26.291833 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:39:26.305257 containerd[1555]: time="2025-05-27T17:39:26.305189434Z" level=info msg="StopContainer for \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" with timeout 30 (s)" May 27 17:39:26.313683 containerd[1555]: time="2025-05-27T17:39:26.313647650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 17:39:26.314727 containerd[1555]: time="2025-05-27T17:39:26.314693984Z" level=info msg="Stop container \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" with signal terminated" May 27 17:39:26.326955 containerd[1555]: time="2025-05-27T17:39:26.326905443Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:26.332791 containerd[1555]: time="2025-05-27T17:39:26.332667739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:26.335350 containerd[1555]: time="2025-05-27T17:39:26.335291935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.154978054s" May 27 17:39:26.335843 containerd[1555]: time="2025-05-27T17:39:26.335494866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 17:39:26.343469 containerd[1555]: time="2025-05-27T17:39:26.343430390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:39:26.346442 systemd[1]: cri-containerd-c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1.scope: Deactivated successfully. May 27 17:39:26.346796 containerd[1555]: time="2025-05-27T17:39:26.346747396Z" level=info msg="CreateContainer within sandbox \"209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 17:39:26.357651 containerd[1555]: time="2025-05-27T17:39:26.357575590Z" level=info msg="received exit event container_id:\"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" id:\"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" pid:5195 exit_status:1 exited_at:{seconds:1748367566 nanos:356278575}" May 27 17:39:26.357808 containerd[1555]: time="2025-05-27T17:39:26.357654358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" id:\"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" pid:5195 exit_status:1 exited_at:{seconds:1748367566 nanos:356278575}" May 27 17:39:26.373679 systemd[1]: Created slice kubepods-besteffort-pod1dfc711d_b974_4c03_89b9_9d1d28a9c1d3.slice - libcontainer container kubepods-besteffort-pod1dfc711d_b974_4c03_89b9_9d1d28a9c1d3.slice. 
May 27 17:39:26.418029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1-rootfs.mount: Deactivated successfully. May 27 17:39:26.438000 kubelet[2696]: I0527 17:39:26.437927 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtbn\" (UniqueName: \"kubernetes.io/projected/1dfc711d-b974-4c03-89b9-9d1d28a9c1d3-kube-api-access-tgtbn\") pod \"calico-apiserver-6888b85474-cskwn\" (UID: \"1dfc711d-b974-4c03-89b9-9d1d28a9c1d3\") " pod="calico-apiserver/calico-apiserver-6888b85474-cskwn" May 27 17:39:26.438000 kubelet[2696]: I0527 17:39:26.437983 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1dfc711d-b974-4c03-89b9-9d1d28a9c1d3-calico-apiserver-certs\") pod \"calico-apiserver-6888b85474-cskwn\" (UID: \"1dfc711d-b974-4c03-89b9-9d1d28a9c1d3\") " pod="calico-apiserver/calico-apiserver-6888b85474-cskwn" May 27 17:39:26.479490 containerd[1555]: time="2025-05-27T17:39:26.478510896Z" level=info msg="Container 64694d7769500808f04f14f520c4dde0afeede12708b0bcda08ace2e04f6fc62: CDI devices from CRI Config.CDIDevices: []" May 27 17:39:26.537661 containerd[1555]: time="2025-05-27T17:39:26.537455460Z" level=info msg="StopContainer for \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" returns successfully" May 27 17:39:26.538358 containerd[1555]: time="2025-05-27T17:39:26.538323911Z" level=info msg="CreateContainer within sandbox \"209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"64694d7769500808f04f14f520c4dde0afeede12708b0bcda08ace2e04f6fc62\"" May 27 17:39:26.538731 containerd[1555]: time="2025-05-27T17:39:26.538698964Z" level=info msg="StopPodSandbox for \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\"" May 27 17:39:26.539741 containerd[1555]: time="2025-05-27T17:39:26.539091221Z" level=info msg="StartContainer for \"64694d7769500808f04f14f520c4dde0afeede12708b0bcda08ace2e04f6fc62\"" May 27 17:39:26.541531 containerd[1555]: time="2025-05-27T17:39:26.541098478Z" level=info msg="connecting to shim 64694d7769500808f04f14f520c4dde0afeede12708b0bcda08ace2e04f6fc62" address="unix:///run/containerd/s/6a3bea2c6c4c69a174200f9b5c1d53d804c5cd48a1affdca2019e752cb39ce19" protocol=ttrpc version=3 May 27 17:39:26.558181 containerd[1555]: time="2025-05-27T17:39:26.558124846Z" level=info msg="Container to stop \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:39:26.568626 systemd[1]: cri-containerd-aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e.scope: Deactivated successfully. May 27 17:39:26.577226 containerd[1555]: time="2025-05-27T17:39:26.577189098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" id:\"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" pid:4791 exit_status:137 exited_at:{seconds:1748367566 nanos:576816809}" May 27 17:39:26.583964 systemd[1]: Started cri-containerd-64694d7769500808f04f14f520c4dde0afeede12708b0bcda08ace2e04f6fc62.scope - libcontainer container 64694d7769500808f04f14f520c4dde0afeede12708b0bcda08ace2e04f6fc62. 
May 27 17:39:26.603481 containerd[1555]: time="2025-05-27T17:39:26.603441572Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:39:26.604977 containerd[1555]: time="2025-05-27T17:39:26.604943019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:39:26.605114 containerd[1555]: time="2025-05-27T17:39:26.605058085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:39:26.605332 kubelet[2696]: E0527 17:39:26.605292 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:39:26.605494 kubelet[2696]: E0527 17:39:26.605473 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:39:26.610580 kubelet[2696]: E0527 17:39:26.610497 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c57e48272564815bb33a455bb42c0db,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wr22r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68649cd6d-qn77g_calico-system(97287dcd-fd61-4753-a782-d95c978e039a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:39:26.613663 containerd[1555]: time="2025-05-27T17:39:26.613573218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:39:26.619634 containerd[1555]: time="2025-05-27T17:39:26.619265011Z" level=info msg="shim disconnected" id=aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e namespace=k8s.io May 27 17:39:26.619634 containerd[1555]: time="2025-05-27T17:39:26.619306138Z" level=warning msg="cleaning up after shim disconnected" id=aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e namespace=k8s.io May 27 17:39:26.630729 containerd[1555]: time="2025-05-27T17:39:26.619316037Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:39:26.650030 containerd[1555]: time="2025-05-27T17:39:26.649895262Z" level=info msg="StartContainer for \"64694d7769500808f04f14f520c4dde0afeede12708b0bcda08ace2e04f6fc62\" returns successfully" May 27 17:39:26.680709 containerd[1555]: time="2025-05-27T17:39:26.680652070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888b85474-cskwn,Uid:1dfc711d-b974-4c03-89b9-9d1d28a9c1d3,Namespace:calico-apiserver,Attempt:0,}" May 27 17:39:26.780237 containerd[1555]: time="2025-05-27T17:39:26.780175368Z" level=info msg="received exit event sandbox_id:\"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" exit_status:137 exited_at:{seconds:1748367566 nanos:576816809}" May 27 17:39:26.823277 systemd-networkd[1486]: calid20576dc7f5: Link DOWN May 27 17:39:26.823288 systemd-networkd[1486]: calid20576dc7f5: Lost carrier May 27 17:39:26.857825 
containerd[1555]: time="2025-05-27T17:39:26.857540626Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:39:26.878012 containerd[1555]: time="2025-05-27T17:39:26.877935496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:39:26.878394 containerd[1555]: time="2025-05-27T17:39:26.878059328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:39:26.878432 kubelet[2696]: E0527 17:39:26.878295 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:39:26.878432 kubelet[2696]: E0527 17:39:26.878383 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:39:26.879050 kubelet[2696]: E0527 17:39:26.878965 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr22r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68649cd6d-qn77g_calico-system(97287dcd-fd61-4753-a782-d95c978e039a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:39:26.879234 containerd[1555]: time="2025-05-27T17:39:26.879209657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 17:39:26.880681 kubelet[2696]: E0527 17:39:26.880636 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-68649cd6d-qn77g" 
podUID="97287dcd-fd61-4753-a782-d95c978e039a" May 27 17:39:27.043705 systemd-networkd[1486]: cali4e98a6a9c13: Link UP May 27 17:39:27.044259 systemd-networkd[1486]: cali4e98a6a9c13: Gained carrier May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.808 [INFO][5397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0 calico-apiserver-6888b85474- calico-apiserver 1dfc711d-b974-4c03-89b9-9d1d28a9c1d3 1207 0 2025-05-27 17:39:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6888b85474 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6888b85474-cskwn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e98a6a9c13 [] [] }} ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.808 [INFO][5397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.854 [INFO][5430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" HandleID="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Workload="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.854 [INFO][5430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" HandleID="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Workload="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6888b85474-cskwn", "timestamp":"2025-05-27 17:39:26.854787018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.855 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.855 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.855 [INFO][5430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.862 [INFO][5430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.868 [INFO][5430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.873 [INFO][5430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.875 [INFO][5430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.877 [INFO][5430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.877 [INFO][5430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.880 [INFO][5430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:26.886 [INFO][5430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:27.037 [INFO][5430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:27.037 [INFO][5430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" host="localhost" May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:27.037 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:39:27.101218 containerd[1555]: 2025-05-27 17:39:27.037 [INFO][5430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" HandleID="k8s-pod-network.2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Workload="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" May 27 17:39:27.102445 containerd[1555]: 2025-05-27 17:39:27.040 [INFO][5397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0", GenerateName:"calico-apiserver-6888b85474-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dfc711d-b974-4c03-89b9-9d1d28a9c1d3", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 39, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6888b85474", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6888b85474-cskwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e98a6a9c13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:27.102445 containerd[1555]: 2025-05-27 17:39:27.040 [INFO][5397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" May 27 17:39:27.102445 containerd[1555]: 2025-05-27 17:39:27.041 [INFO][5397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e98a6a9c13 ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" May 27 17:39:27.102445 containerd[1555]: 2025-05-27 17:39:27.044 [INFO][5397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" May 27 17:39:27.102445 containerd[1555]: 2025-05-27 17:39:27.044 [INFO][5397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0", GenerateName:"calico-apiserver-6888b85474-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dfc711d-b974-4c03-89b9-9d1d28a9c1d3", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 39, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6888b85474", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c", Pod:"calico-apiserver-6888b85474-cskwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e98a6a9c13", MAC:"1e:c7:6b:96:c2:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:39:27.102445 containerd[1555]: 2025-05-27 17:39:27.096 [INFO][5397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" Namespace="calico-apiserver" Pod="calico-apiserver-6888b85474-cskwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6888b85474--cskwn-eth0" May 27 17:39:27.127702 containerd[1555]: time="2025-05-27T17:39:27.127641864Z" level=info msg="connecting to shim 2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c" address="unix:///run/containerd/s/d1edced41bfbe5eddfc60585bdd262df599f80f41782e6353b4b889bb1c434bb" namespace=k8s.io protocol=ttrpc version=3 May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.821 [INFO][5420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.821 [INFO][5420] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" iface="eth0" netns="/var/run/netns/cni-d00673aa-cd9c-d82a-dae0-398cfb59ec53" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.821 [INFO][5420] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" iface="eth0" netns="/var/run/netns/cni-d00673aa-cd9c-d82a-dae0-398cfb59ec53" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.831 [INFO][5420] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" after=10.292348ms iface="eth0" netns="/var/run/netns/cni-d00673aa-cd9c-d82a-dae0-398cfb59ec53" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.831 [INFO][5420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.832 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.855 [INFO][5441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:26.855 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:27.037 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:27.120 [INFO][5441] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:27.120 [INFO][5441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0" May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:27.122 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:39:27.129037 containerd[1555]: 2025-05-27 17:39:27.125 [INFO][5420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" May 27 17:39:27.137176 containerd[1555]: time="2025-05-27T17:39:27.137138437Z" level=info msg="TearDown network for sandbox \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" successfully" May 27 17:39:27.137261 containerd[1555]: time="2025-05-27T17:39:27.137244306Z" level=info msg="StopPodSandbox for \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" returns successfully" May 27 17:39:27.158853 systemd[1]: Started cri-containerd-2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c.scope - libcontainer container 2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c. 
May 27 17:39:27.174272 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:39:27.212760 containerd[1555]: time="2025-05-27T17:39:27.212682320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888b85474-cskwn,Uid:1dfc711d-b974-4c03-89b9-9d1d28a9c1d3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c\"" May 27 17:39:27.215624 containerd[1555]: time="2025-05-27T17:39:27.215301736Z" level=info msg="CreateContainer within sandbox \"2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:39:27.243344 kubelet[2696]: I0527 17:39:27.243275 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18982c18-ea20-425e-ae4b-4b49d57db0c3-calico-apiserver-certs\") pod \"18982c18-ea20-425e-ae4b-4b49d57db0c3\" (UID: \"18982c18-ea20-425e-ae4b-4b49d57db0c3\") " May 27 17:39:27.243344 kubelet[2696]: I0527 17:39:27.243323 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftrsl\" (UniqueName: \"kubernetes.io/projected/18982c18-ea20-425e-ae4b-4b49d57db0c3-kube-api-access-ftrsl\") pod \"18982c18-ea20-425e-ae4b-4b49d57db0c3\" (UID: \"18982c18-ea20-425e-ae4b-4b49d57db0c3\") " May 27 17:39:27.250190 kubelet[2696]: I0527 17:39:27.250139 2696 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18982c18-ea20-425e-ae4b-4b49d57db0c3-kube-api-access-ftrsl" (OuterVolumeSpecName: "kube-api-access-ftrsl") pod "18982c18-ea20-425e-ae4b-4b49d57db0c3" (UID: "18982c18-ea20-425e-ae4b-4b49d57db0c3"). InnerVolumeSpecName "kube-api-access-ftrsl". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:39:27.250367 kubelet[2696]: I0527 17:39:27.250343 2696 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18982c18-ea20-425e-ae4b-4b49d57db0c3-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "18982c18-ea20-425e-ae4b-4b49d57db0c3" (UID: "18982c18-ea20-425e-ae4b-4b49d57db0c3"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:39:27.261608 kubelet[2696]: I0527 17:39:27.261570 2696 scope.go:117] "RemoveContainer" containerID="c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1" May 27 17:39:27.263997 containerd[1555]: time="2025-05-27T17:39:27.263959989Z" level=info msg="RemoveContainer for \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\"" May 27 17:39:27.268834 systemd[1]: Removed slice kubepods-besteffort-pod18982c18_ea20_425e_ae4b_4b49d57db0c3.slice - libcontainer container kubepods-besteffort-pod18982c18_ea20_425e_ae4b_4b49d57db0c3.slice. 
May 27 17:39:27.344131 kubelet[2696]: I0527 17:39:27.344088 2696 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18982c18-ea20-425e-ae4b-4b49d57db0c3-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 27 17:39:27.344131 kubelet[2696]: I0527 17:39:27.344123 2696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftrsl\" (UniqueName: \"kubernetes.io/projected/18982c18-ea20-425e-ae4b-4b49d57db0c3-kube-api-access-ftrsl\") on node \"localhost\" DevicePath \"\""
May 27 17:39:27.416620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e-rootfs.mount: Deactivated successfully.
May 27 17:39:27.416754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e-shm.mount: Deactivated successfully.
May 27 17:39:27.416854 systemd[1]: run-netns-cni\x2dd00673aa\x2dcd9c\x2dd82a\x2ddae0\x2d398cfb59ec53.mount: Deactivated successfully.
May 27 17:39:27.416945 systemd[1]: var-lib-kubelet-pods-18982c18\x2dea20\x2d425e\x2dae4b\x2d4b49d57db0c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dftrsl.mount: Deactivated successfully.
May 27 17:39:27.417036 systemd[1]: var-lib-kubelet-pods-18982c18\x2dea20\x2d425e\x2dae4b\x2d4b49d57db0c3-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 27 17:39:27.491139 containerd[1555]: time="2025-05-27T17:39:27.491083762Z" level=info msg="RemoveContainer for \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" returns successfully"
May 27 17:39:27.497958 containerd[1555]: time="2025-05-27T17:39:27.497921215Z" level=info msg="Container a3c5907de3233b9e893d6e8bd1d7d6cb081849407938c5021eb9afa8776f15f4: CDI devices from CRI Config.CDIDevices: []"
May 27 17:39:27.498637 kubelet[2696]: I0527 17:39:27.498585 2696 scope.go:117] "RemoveContainer" containerID="c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1"
May 27 17:39:27.499207 containerd[1555]: time="2025-05-27T17:39:27.499152625Z" level=error msg="ContainerStatus for \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\": not found"
May 27 17:39:27.499428 kubelet[2696]: E0527 17:39:27.499405 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\": not found" containerID="c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1"
May 27 17:39:27.499660 kubelet[2696]: I0527 17:39:27.499517 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1"} err="failed to get container status \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2f27bea72dddcadcb164ddff52fcb83698efbe25c1fe942e066551dc6f73ec1\": not found"
May 27 17:39:27.562691 containerd[1555]: time="2025-05-27T17:39:27.562648897Z" level=info msg="CreateContainer within sandbox \"2e87d2260bd80d001638584621d8a240b3a6ecf5839f6a0d22a2fa2839e1379c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a3c5907de3233b9e893d6e8bd1d7d6cb081849407938c5021eb9afa8776f15f4\""
May 27 17:39:27.563617 containerd[1555]: time="2025-05-27T17:39:27.563565437Z" level=info msg="StartContainer for \"a3c5907de3233b9e893d6e8bd1d7d6cb081849407938c5021eb9afa8776f15f4\""
May 27 17:39:27.564996 containerd[1555]: time="2025-05-27T17:39:27.564959132Z" level=info msg="connecting to shim a3c5907de3233b9e893d6e8bd1d7d6cb081849407938c5021eb9afa8776f15f4" address="unix:///run/containerd/s/d1edced41bfbe5eddfc60585bdd262df599f80f41782e6353b4b889bb1c434bb" protocol=ttrpc version=3
May 27 17:39:27.590727 systemd[1]: Started cri-containerd-a3c5907de3233b9e893d6e8bd1d7d6cb081849407938c5021eb9afa8776f15f4.scope - libcontainer container a3c5907de3233b9e893d6e8bd1d7d6cb081849407938c5021eb9afa8776f15f4.
May 27 17:39:27.643054 containerd[1555]: time="2025-05-27T17:39:27.643008057Z" level=info msg="StartContainer for \"a3c5907de3233b9e893d6e8bd1d7d6cb081849407938c5021eb9afa8776f15f4\" returns successfully"
May 27 17:39:28.052964 kubelet[2696]: I0527 17:39:28.052913 2696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18982c18-ea20-425e-ae4b-4b49d57db0c3" path="/var/lib/kubelet/pods/18982c18-ea20-425e-ae4b-4b49d57db0c3/volumes"
May 27 17:39:28.088271 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:54180.service - OpenSSH per-connection server daemon (10.0.0.1:54180).
May 27 17:39:28.157964 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 54180 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:28.159925 sshd-session[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:28.165185 systemd-logind[1539]: New session 10 of user core.
May 27 17:39:28.171801 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 17:39:28.294374 kubelet[2696]: I0527 17:39:28.294254 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6888b85474-cskwn" podStartSLOduration=2.294236133 podStartE2EDuration="2.294236133s" podCreationTimestamp="2025-05-27 17:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:39:28.293703984 +0000 UTC m=+54.360490629" watchObservedRunningTime="2025-05-27 17:39:28.294236133 +0000 UTC m=+54.361022778"
May 27 17:39:28.322108 sshd[5552]: Connection closed by 10.0.0.1 port 54180
May 27 17:39:28.322266 sshd-session[5550]: pam_unix(sshd:session): session closed for user core
May 27 17:39:28.331864 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:54180.service: Deactivated successfully.
May 27 17:39:28.333871 systemd[1]: session-10.scope: Deactivated successfully.
May 27 17:39:28.334692 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit.
May 27 17:39:28.342442 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:54188.service - OpenSSH per-connection server daemon (10.0.0.1:54188).
May 27 17:39:28.370838 systemd-logind[1539]: Removed session 10.
May 27 17:39:28.395008 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 54188 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:28.394870 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:28.400433 systemd-logind[1539]: New session 11 of user core.
May 27 17:39:28.408821 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 17:39:28.613442 sshd[5570]: Connection closed by 10.0.0.1 port 54188
May 27 17:39:28.612831 sshd-session[5568]: pam_unix(sshd:session): session closed for user core
May 27 17:39:28.630249 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:54188.service: Deactivated successfully.
May 27 17:39:28.636073 systemd[1]: session-11.scope: Deactivated successfully.
May 27 17:39:28.642067 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit.
May 27 17:39:28.653952 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:54198.service - OpenSSH per-connection server daemon (10.0.0.1:54198).
May 27 17:39:28.658850 systemd-logind[1539]: Removed session 11.
May 27 17:39:28.724452 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 54198 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:28.727148 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:28.742306 systemd-logind[1539]: New session 12 of user core.
May 27 17:39:28.750006 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 17:39:28.856935 containerd[1555]: time="2025-05-27T17:39:28.855011620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:39:28.861377 systemd-networkd[1486]: cali4e98a6a9c13: Gained IPv6LL
May 27 17:39:28.866587 containerd[1555]: time="2025-05-27T17:39:28.866296409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 27 17:39:28.870945 containerd[1555]: time="2025-05-27T17:39:28.870125114Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:39:28.879259 containerd[1555]: time="2025-05-27T17:39:28.879175940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:39:28.882649 containerd[1555]: time="2025-05-27T17:39:28.882203613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.002945424s"
May 27 17:39:28.882649 containerd[1555]: time="2025-05-27T17:39:28.882326403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 27 17:39:28.891881 containerd[1555]: time="2025-05-27T17:39:28.890833158Z" level=info msg="CreateContainer within sandbox \"209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 27 17:39:28.927657 containerd[1555]: time="2025-05-27T17:39:28.926349373Z" level=info msg="Container 0238c0a0f5d3f9569c99c32198df7ddfbceafc56024ea3bf8f5ae2c2407dab6c: CDI devices from CRI Config.CDIDevices: []"
May 27 17:39:28.983527 containerd[1555]: time="2025-05-27T17:39:28.983324439Z" level=info msg="CreateContainer within sandbox \"209f75d9f7a6b4cd3b4923d9486dc69491214065d41343ed0682bf5a71aa2b31\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0238c0a0f5d3f9569c99c32198df7ddfbceafc56024ea3bf8f5ae2c2407dab6c\""
May 27 17:39:28.991216 containerd[1555]: time="2025-05-27T17:39:28.990694410Z" level=info msg="StartContainer for \"0238c0a0f5d3f9569c99c32198df7ddfbceafc56024ea3bf8f5ae2c2407dab6c\""
May 27 17:39:29.005666 containerd[1555]: time="2025-05-27T17:39:29.004863613Z" level=info msg="connecting to shim 0238c0a0f5d3f9569c99c32198df7ddfbceafc56024ea3bf8f5ae2c2407dab6c" address="unix:///run/containerd/s/6a3bea2c6c4c69a174200f9b5c1d53d804c5cd48a1affdca2019e752cb39ce19" protocol=ttrpc version=3
May 27 17:39:29.112328 systemd[1]: Started cri-containerd-0238c0a0f5d3f9569c99c32198df7ddfbceafc56024ea3bf8f5ae2c2407dab6c.scope - libcontainer container 0238c0a0f5d3f9569c99c32198df7ddfbceafc56024ea3bf8f5ae2c2407dab6c.
May 27 17:39:29.163886 sshd[5588]: Connection closed by 10.0.0.1 port 54198
May 27 17:39:29.164787 sshd-session[5586]: pam_unix(sshd:session): session closed for user core
May 27 17:39:29.177272 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:54198.service: Deactivated successfully.
May 27 17:39:29.191514 systemd[1]: session-12.scope: Deactivated successfully.
May 27 17:39:29.197899 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit.
May 27 17:39:29.202993 systemd-logind[1539]: Removed session 12.
May 27 17:39:29.287928 containerd[1555]: time="2025-05-27T17:39:29.286870811Z" level=info msg="StartContainer for \"0238c0a0f5d3f9569c99c32198df7ddfbceafc56024ea3bf8f5ae2c2407dab6c\" returns successfully"
May 27 17:39:29.288539 kubelet[2696]: I0527 17:39:29.288461 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:39:30.171002 kubelet[2696]: I0527 17:39:30.170936 2696 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 17:39:30.171002 kubelet[2696]: I0527 17:39:30.170993 2696 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 17:39:30.303707 kubelet[2696]: I0527 17:39:30.303523 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v2xzb" podStartSLOduration=27.047237413 podStartE2EDuration="35.303507146s" podCreationTimestamp="2025-05-27 17:38:55 +0000 UTC" firstStartedPulling="2025-05-27 17:39:20.631431346 +0000 UTC m=+46.698217981" lastFinishedPulling="2025-05-27 17:39:28.887701069 +0000 UTC m=+54.954487714" observedRunningTime="2025-05-27 17:39:30.302943699 +0000 UTC m=+56.369730344" watchObservedRunningTime="2025-05-27 17:39:30.303507146 +0000 UTC m=+56.370293791"
May 27 17:39:34.041049 containerd[1555]: time="2025-05-27T17:39:34.041001903Z" level=info msg="StopPodSandbox for \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\""
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.083 [WARNING][5657] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.084 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.084 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" iface="eth0" netns=""
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.084 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.084 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.105 [INFO][5667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.105 [INFO][5667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.106 [INFO][5667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.111 [WARNING][5667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.111 [INFO][5667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.113 [INFO][5667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 17:39:34.117953 containerd[1555]: 2025-05-27 17:39:34.115 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.118461 containerd[1555]: time="2025-05-27T17:39:34.118422417Z" level=info msg="TearDown network for sandbox \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" successfully"
May 27 17:39:34.118461 containerd[1555]: time="2025-05-27T17:39:34.118452974Z" level=info msg="StopPodSandbox for \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" returns successfully"
May 27 17:39:34.119159 containerd[1555]: time="2025-05-27T17:39:34.119122580Z" level=info msg="RemovePodSandbox for \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\""
May 27 17:39:34.119212 containerd[1555]: time="2025-05-27T17:39:34.119173787Z" level=info msg="Forcibly stopping sandbox \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\""
May 27 17:39:34.173256 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:39504.service - OpenSSH per-connection server daemon (10.0.0.1:39504).
May 27 17:39:34.242141 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 39504 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:34.244479 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:34.250268 systemd-logind[1539]: New session 13 of user core.
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.218 [WARNING][5685] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" WorkloadEndpoint="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.218 [INFO][5685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.218 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" iface="eth0" netns=""
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.218 [INFO][5685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.218 [INFO][5685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.238 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.238 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.238 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.244 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.244 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" HandleID="k8s-pod-network.aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e" Workload="localhost-k8s-calico--apiserver--77658bc8b9--pns44-eth0"
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.246 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 17:39:34.251640 containerd[1555]: 2025-05-27 17:39:34.249 [INFO][5685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e"
May 27 17:39:34.252181 containerd[1555]: time="2025-05-27T17:39:34.251690988Z" level=info msg="TearDown network for sandbox \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" successfully"
May 27 17:39:34.254851 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 17:39:34.257808 containerd[1555]: time="2025-05-27T17:39:34.257758722Z" level=info msg="Ensure that sandbox aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e in task-service has been cleanup successfully"
May 27 17:39:34.424731 sshd[5705]: Connection closed by 10.0.0.1 port 39504
May 27 17:39:34.425559 sshd-session[5694]: pam_unix(sshd:session): session closed for user core
May 27 17:39:34.429501 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:39504.service: Deactivated successfully.
May 27 17:39:34.431711 systemd[1]: session-13.scope: Deactivated successfully.
May 27 17:39:34.432533 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit.
May 27 17:39:34.433924 systemd-logind[1539]: Removed session 13.
May 27 17:39:34.985999 containerd[1555]: time="2025-05-27T17:39:34.985918503Z" level=info msg="RemovePodSandbox \"aff42675b852cc9ddea00136b95f5d0ef45f99b9cad76c7154e281518d53666e\" returns successfully"
May 27 17:39:39.052047 containerd[1555]: time="2025-05-27T17:39:39.051982436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:39:39.053314 kubelet[2696]: E0527 17:39:39.053238 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-68649cd6d-qn77g" podUID="97287dcd-fd61-4753-a782-d95c978e039a"
May 27 17:39:39.304281 containerd[1555]: time="2025-05-27T17:39:39.304078813Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:39:39.306692 containerd[1555]: time="2025-05-27T17:39:39.306579236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:39:39.306864 containerd[1555]: time="2025-05-27T17:39:39.306631166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:39:39.306898 kubelet[2696]: E0527 17:39:39.306858 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:39:39.306984 kubelet[2696]: E0527 17:39:39.306912 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:39:39.307145 kubelet[2696]: E0527 17:39:39.307066 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfk8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5nh2v_calico-system(bc5e9290-4a3a-4633-af11-d46d40c33905): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:39:39.308361 kubelet[2696]: E0527 17:39:39.308298 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5nh2v" podUID="bc5e9290-4a3a-4633-af11-d46d40c33905"
May 27 17:39:39.441015 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:39512.service - OpenSSH per-connection server daemon (10.0.0.1:39512).
May 27 17:39:39.495773 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 39512 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:39.497419 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:39.501825 systemd-logind[1539]: New session 14 of user core.
May 27 17:39:39.517790 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 17:39:39.631866 sshd[5727]: Connection closed by 10.0.0.1 port 39512
May 27 17:39:39.632079 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
May 27 17:39:39.636404 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:39512.service: Deactivated successfully.
May 27 17:39:39.638303 systemd[1]: session-14.scope: Deactivated successfully.
May 27 17:39:39.639108 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit.
May 27 17:39:39.640486 systemd-logind[1539]: Removed session 14.
May 27 17:39:41.734087 kubelet[2696]: I0527 17:39:41.734033 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:39:41.776061 containerd[1555]: time="2025-05-27T17:39:41.775856891Z" level=info msg="StopContainer for \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" with timeout 30 (s)"
May 27 17:39:41.776617 containerd[1555]: time="2025-05-27T17:39:41.776553640Z" level=info msg="Stop container \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" with signal terminated"
May 27 17:39:41.801122 systemd[1]: cri-containerd-693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7.scope: Deactivated successfully.
May 27 17:39:41.801727 systemd[1]: cri-containerd-693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7.scope: Consumed 1.264s CPU time, 65.1M memory peak, 596K read from disk.
May 27 17:39:41.804397 containerd[1555]: time="2025-05-27T17:39:41.804308551Z" level=info msg="received exit event container_id:\"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" id:\"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" pid:5087 exit_status:1 exited_at:{seconds:1748367581 nanos:803489247}"
May 27 17:39:41.805317 containerd[1555]: time="2025-05-27T17:39:41.805286371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" id:\"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" pid:5087 exit_status:1 exited_at:{seconds:1748367581 nanos:803489247}"
May 27 17:39:41.858646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7-rootfs.mount: Deactivated successfully.
May 27 17:39:41.880859 containerd[1555]: time="2025-05-27T17:39:41.880805857Z" level=info msg="StopContainer for \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" returns successfully"
May 27 17:39:41.881726 containerd[1555]: time="2025-05-27T17:39:41.881586137Z" level=info msg="StopPodSandbox for \"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\""
May 27 17:39:41.881826 containerd[1555]: time="2025-05-27T17:39:41.881796973Z" level=info msg="Container to stop \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:39:41.890781 systemd[1]: cri-containerd-350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d.scope: Deactivated successfully.
May 27 17:39:41.893809 containerd[1555]: time="2025-05-27T17:39:41.893771881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\" id:\"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\" pid:4233 exit_status:137 exited_at:{seconds:1748367581 nanos:892234616}"
May 27 17:39:41.926713 containerd[1555]: time="2025-05-27T17:39:41.926624111Z" level=info msg="shim disconnected" id=350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d namespace=k8s.io
May 27 17:39:41.926713 containerd[1555]: time="2025-05-27T17:39:41.926668716Z" level=warning msg="cleaning up after shim disconnected" id=350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d namespace=k8s.io
May 27 17:39:41.926713 containerd[1555]: time="2025-05-27T17:39:41.926678586Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 17:39:41.927453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d-rootfs.mount: Deactivated successfully.
May 27 17:39:41.950245 containerd[1555]: time="2025-05-27T17:39:41.949585007Z" level=info msg="received exit event sandbox_id:\"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\" exit_status:137 exited_at:{seconds:1748367581 nanos:892234616}"
May 27 17:39:41.952548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d-shm.mount: Deactivated successfully.
May 27 17:39:42.019938 systemd-networkd[1486]: cali438030f4b88: Link DOWN
May 27 17:39:42.019950 systemd-networkd[1486]: cali438030f4b88: Lost carrier
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.017 [INFO][5820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.018 [INFO][5820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" iface="eth0" netns="/var/run/netns/cni-3a41cda0-a9d5-8fff-d9e8-198979c27ecd"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.018 [INFO][5820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" iface="eth0" netns="/var/run/netns/cni-3a41cda0-a9d5-8fff-d9e8-198979c27ecd"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.026 [INFO][5820] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" after=7.889436ms iface="eth0" netns="/var/run/netns/cni-3a41cda0-a9d5-8fff-d9e8-198979c27ecd"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.026 [INFO][5820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.026 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.047 [INFO][5833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" HandleID="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Workload="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.048 [INFO][5833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.048 [INFO][5833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.118 [INFO][5833] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" HandleID="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Workload="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.118 [INFO][5833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" HandleID="k8s-pod-network.350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d" Workload="localhost-k8s-calico--apiserver--77658bc8b9--7hcld-eth0"
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.119 [INFO][5833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 17:39:42.127513 containerd[1555]: 2025-05-27 17:39:42.122 [INFO][5820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d"
May 27 17:39:42.130798 containerd[1555]: time="2025-05-27T17:39:42.128129620Z" level=info msg="TearDown network for sandbox \"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\" successfully"
May 27 17:39:42.130798 containerd[1555]: time="2025-05-27T17:39:42.128159768Z" level=info msg="StopPodSandbox for \"350094fb4efec768a248289c8b711e22a3d48e98b9fabf03d7f4c0a4a2fb2f0d\" returns successfully"
May 27 17:39:42.130730 systemd[1]: run-netns-cni\x2d3a41cda0\x2da9d5\x2d8fff\x2dd9e8\x2d198979c27ecd.mount: Deactivated successfully.
May 27 17:39:42.259977 kubelet[2696]: I0527 17:39:42.259923 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-calico-apiserver-certs\") pod \"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1\" (UID: \"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1\") "
May 27 17:39:42.259977 kubelet[2696]: I0527 17:39:42.259964 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvksc\" (UniqueName: \"kubernetes.io/projected/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-kube-api-access-wvksc\") pod \"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1\" (UID: \"1e33f0f8-4ca3-40e9-893d-92f7065bb1f1\") "
May 27 17:39:42.264407 kubelet[2696]: I0527 17:39:42.264357 2696 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-kube-api-access-wvksc" (OuterVolumeSpecName: "kube-api-access-wvksc") pod "1e33f0f8-4ca3-40e9-893d-92f7065bb1f1" (UID: "1e33f0f8-4ca3-40e9-893d-92f7065bb1f1"). InnerVolumeSpecName "kube-api-access-wvksc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 17:39:42.265528 systemd[1]: var-lib-kubelet-pods-1e33f0f8\x2d4ca3\x2d40e9\x2d893d\x2d92f7065bb1f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwvksc.mount: Deactivated successfully.
May 27 17:39:42.265792 kubelet[2696]: I0527 17:39:42.265529 2696 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "1e33f0f8-4ca3-40e9-893d-92f7065bb1f1" (UID: "1e33f0f8-4ca3-40e9-893d-92f7065bb1f1"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 17:39:42.265674 systemd[1]: var-lib-kubelet-pods-1e33f0f8\x2d4ca3\x2d40e9\x2d893d\x2d92f7065bb1f1-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 27 17:39:42.318988 kubelet[2696]: I0527 17:39:42.318831 2696 scope.go:117] "RemoveContainer" containerID="693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7"
May 27 17:39:42.321735 containerd[1555]: time="2025-05-27T17:39:42.321172896Z" level=info msg="RemoveContainer for \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\""
May 27 17:39:42.324865 systemd[1]: Removed slice kubepods-besteffort-pod1e33f0f8_4ca3_40e9_893d_92f7065bb1f1.slice - libcontainer container kubepods-besteffort-pod1e33f0f8_4ca3_40e9_893d_92f7065bb1f1.slice.
May 27 17:39:42.324996 systemd[1]: kubepods-besteffort-pod1e33f0f8_4ca3_40e9_893d_92f7065bb1f1.slice: Consumed 1.293s CPU time, 65.4M memory peak, 596K read from disk.
May 27 17:39:42.327008 containerd[1555]: time="2025-05-27T17:39:42.326972046Z" level=info msg="RemoveContainer for \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" returns successfully"
May 27 17:39:42.327235 kubelet[2696]: I0527 17:39:42.327130 2696 scope.go:117] "RemoveContainer" containerID="693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7"
May 27 17:39:42.327634 containerd[1555]: time="2025-05-27T17:39:42.327498838Z" level=error msg="ContainerStatus for \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\": not found"
May 27 17:39:42.327778 kubelet[2696]: E0527 17:39:42.327761 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\": not found" containerID="693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7"
May 27 17:39:42.327854 kubelet[2696]: I0527 17:39:42.327783 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7"} err="failed to get container status \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"693b95c954b97a3897d8adc435e6d1bb204ca18ab7e1c8c042bc1ecb7ac082c7\": not found"
May 27 17:39:42.361207 kubelet[2696]: I0527 17:39:42.361166 2696 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 27 17:39:42.361207 kubelet[2696]: I0527 17:39:42.361194 2696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wvksc\" (UniqueName: \"kubernetes.io/projected/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1-kube-api-access-wvksc\") on node \"localhost\" DevicePath \"\""
May 27 17:39:43.258624 containerd[1555]: time="2025-05-27T17:39:43.258557188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad\" id:\"67c8658eb22c95ee1ee5358db8de141482ed82475592399098111bd76d3e91c5\" pid:5863 exited_at:{seconds:1748367583 nanos:258263313}"
May 27 17:39:43.448166 containerd[1555]: time="2025-05-27T17:39:43.448106016Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1748367581 nanos:892234616}"
May 27 17:39:44.053169 kubelet[2696]: I0527 17:39:44.053118 2696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e33f0f8-4ca3-40e9-893d-92f7065bb1f1" path="/var/lib/kubelet/pods/1e33f0f8-4ca3-40e9-893d-92f7065bb1f1/volumes"
May 27 17:39:44.647496 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:41360.service - OpenSSH per-connection server daemon (10.0.0.1:41360).
May 27 17:39:44.705622 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 41360 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:44.707209 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:44.711707 systemd-logind[1539]: New session 15 of user core.
May 27 17:39:44.721748 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 17:39:44.836917 sshd[5878]: Connection closed by 10.0.0.1 port 41360
May 27 17:39:44.837253 sshd-session[5876]: pam_unix(sshd:session): session closed for user core
May 27 17:39:44.842202 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:41360.service: Deactivated successfully.
May 27 17:39:44.844423 systemd[1]: session-15.scope: Deactivated successfully.
May 27 17:39:44.845219 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit.
May 27 17:39:44.846969 systemd-logind[1539]: Removed session 15.
May 27 17:39:46.050818 kubelet[2696]: E0527 17:39:46.050759 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:39:49.850849 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:41374.service - OpenSSH per-connection server daemon (10.0.0.1:41374).
May 27 17:39:49.909783 sshd[5893]: Accepted publickey for core from 10.0.0.1 port 41374 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:49.911745 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:49.916860 systemd-logind[1539]: New session 16 of user core.
May 27 17:39:49.930756 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 17:39:50.043733 sshd[5896]: Connection closed by 10.0.0.1 port 41374
May 27 17:39:50.044122 sshd-session[5893]: pam_unix(sshd:session): session closed for user core
May 27 17:39:50.051334 containerd[1555]: time="2025-05-27T17:39:50.051302146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 17:39:50.056856 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:41374.service: Deactivated successfully.
May 27 17:39:50.059186 systemd[1]: session-16.scope: Deactivated successfully.
May 27 17:39:50.061070 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit.
May 27 17:39:50.064879 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:41388.service - OpenSSH per-connection server daemon (10.0.0.1:41388).
May 27 17:39:50.065674 systemd-logind[1539]: Removed session 16.
May 27 17:39:50.125169 sshd[5910]: Accepted publickey for core from 10.0.0.1 port 41388 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:50.126953 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:50.131706 systemd-logind[1539]: New session 17 of user core.
May 27 17:39:50.141756 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 17:39:50.307719 containerd[1555]: time="2025-05-27T17:39:50.307655633Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:39:50.309864 containerd[1555]: time="2025-05-27T17:39:50.309822707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:39:50.309973 containerd[1555]: time="2025-05-27T17:39:50.309923699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 17:39:50.310149 kubelet[2696]: E0527 17:39:50.310076 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:39:50.310570 kubelet[2696]: E0527 17:39:50.310157 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:39:50.310570 kubelet[2696]: E0527 17:39:50.310278 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c57e48272564815bb33a455bb42c0db,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wr22r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68649cd6d-qn77g_calico-system(97287dcd-fd61-4753-a782-d95c978e039a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:39:50.313063 containerd[1555]: time="2025-05-27T17:39:50.313024678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 17:39:50.436540 sshd[5912]: Connection closed by 10.0.0.1 port 41388
May 27 17:39:50.437089 sshd-session[5910]: pam_unix(sshd:session): session closed for user core
May 27 17:39:50.453437 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:41388.service: Deactivated successfully.
May 27 17:39:50.455641 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:39:50.456680 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit.
May 27 17:39:50.460049 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:41396.service - OpenSSH per-connection server daemon (10.0.0.1:41396).
May 27 17:39:50.461012 systemd-logind[1539]: Removed session 17.
May 27 17:39:50.516520 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 41396 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:50.518481 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:50.524175 systemd-logind[1539]: New session 18 of user core.
May 27 17:39:50.533736 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:39:50.587852 containerd[1555]: time="2025-05-27T17:39:50.587786141Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:39:50.589274 containerd[1555]: time="2025-05-27T17:39:50.589196668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:39:50.589477 containerd[1555]: time="2025-05-27T17:39:50.589284466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:39:50.589622 kubelet[2696]: E0527 17:39:50.589526 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:39:50.589760 kubelet[2696]: E0527 17:39:50.589626 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:39:50.589833 kubelet[2696]: E0527 17:39:50.589771 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr22r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68649cd6d-qn77g_calico-system(97287dcd-fd61-4753-a782-d95c978e039a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:39:50.591274 kubelet[2696]: E0527 17:39:50.591235 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-68649cd6d-qn77g" podUID="97287dcd-fd61-4753-a782-d95c978e039a"
May 27 17:39:51.324900 sshd[5925]: Connection closed by 10.0.0.1 port 41396
May 27 17:39:51.325462 sshd-session[5923]: pam_unix(sshd:session): session closed for user core
May 27 17:39:51.339660 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:41396.service: Deactivated successfully.
May 27 17:39:51.344302 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:39:51.347674 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit.
May 27 17:39:51.350546 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:41400.service - OpenSSH per-connection server daemon (10.0.0.1:41400).
May 27 17:39:51.353087 systemd-logind[1539]: Removed session 18.
May 27 17:39:51.408402 sshd[5951]: Accepted publickey for core from 10.0.0.1 port 41400 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:51.410298 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:51.415426 systemd-logind[1539]: New session 19 of user core.
May 27 17:39:51.424860 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:39:51.657801 sshd[5953]: Connection closed by 10.0.0.1 port 41400
May 27 17:39:51.656969 sshd-session[5951]: pam_unix(sshd:session): session closed for user core
May 27 17:39:51.670765 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:41400.service: Deactivated successfully.
May 27 17:39:51.672987 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:39:51.673781 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit.
May 27 17:39:51.677062 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:41408.service - OpenSSH per-connection server daemon (10.0.0.1:41408).
May 27 17:39:51.677813 systemd-logind[1539]: Removed session 19.
May 27 17:39:51.729404 sshd[5964]: Accepted publickey for core from 10.0.0.1 port 41408 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:51.731308 sshd-session[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:51.737241 systemd-logind[1539]: New session 20 of user core.
May 27 17:39:51.749914 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:39:51.868297 sshd[5966]: Connection closed by 10.0.0.1 port 41408
May 27 17:39:51.868690 sshd-session[5964]: pam_unix(sshd:session): session closed for user core
May 27 17:39:51.873186 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:41408.service: Deactivated successfully.
May 27 17:39:51.875344 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:39:51.876303 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit.
May 27 17:39:51.878144 systemd-logind[1539]: Removed session 20.
May 27 17:39:54.051694 kubelet[2696]: E0527 17:39:54.051587 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5nh2v" podUID="bc5e9290-4a3a-4633-af11-d46d40c33905"
May 27 17:39:54.299458 containerd[1555]: time="2025-05-27T17:39:54.299371985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14fcac6618afe17922cff32fd17679c13f9285b540991779b354f81afa7abc44\" id:\"58d2b44d0de76abc4a5c5e04ae039586ff6ed17c6547bc4e884a42e15e779c34\" pid:5992 exited_at:{seconds:1748367594 nanos:299067805}"
May 27 17:39:56.887905 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:54862.service - OpenSSH per-connection server daemon (10.0.0.1:54862).
May 27 17:39:56.949201 sshd[6003]: Accepted publickey for core from 10.0.0.1 port 54862 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:39:56.951463 sshd-session[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:39:56.957313 systemd-logind[1539]: New session 21 of user core.
May 27 17:39:56.971913 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:39:57.106234 sshd[6005]: Connection closed by 10.0.0.1 port 54862
May 27 17:39:57.106569 sshd-session[6003]: pam_unix(sshd:session): session closed for user core
May 27 17:39:57.110570 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:54862.service: Deactivated successfully.
May 27 17:39:57.112769 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:39:57.113682 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit.
May 27 17:39:57.114869 systemd-logind[1539]: Removed session 21.
May 27 17:40:02.050994 kubelet[2696]: E0527 17:40:02.050423 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:40:02.057117 kubelet[2696]: E0527 17:40:02.057011 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-68649cd6d-qn77g" podUID="97287dcd-fd61-4753-a782-d95c978e039a"
May 27 17:40:02.123296 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:54866.service - OpenSSH per-connection server daemon (10.0.0.1:54866).
May 27 17:40:02.181785 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 54866 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:40:02.183696 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:40:02.188421 systemd-logind[1539]: New session 22 of user core.
May 27 17:40:02.200827 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:40:02.319648 sshd[6030]: Connection closed by 10.0.0.1 port 54866
May 27 17:40:02.319927 sshd-session[6028]: pam_unix(sshd:session): session closed for user core
May 27 17:40:02.324174 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:54866.service: Deactivated successfully.
May 27 17:40:02.326460 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:40:02.327370 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit.
May 27 17:40:02.328707 systemd-logind[1539]: Removed session 22.
May 27 17:40:03.050714 kubelet[2696]: E0527 17:40:03.050659 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:40:06.051237 kubelet[2696]: E0527 17:40:06.050513 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:40:06.052064 containerd[1555]: time="2025-05-27T17:40:06.052030948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:40:06.398141 containerd[1555]: time="2025-05-27T17:40:06.397965276Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:40:06.409179 containerd[1555]: time="2025-05-27T17:40:06.409125979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:40:06.409375 containerd[1555]: time="2025-05-27T17:40:06.409208846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:40:06.409433 kubelet[2696]: E0527 17:40:06.409330 2696 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:40:06.409433 kubelet[2696]: E0527 17:40:06.409379 2696 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:40:06.409627 kubelet[2696]: E0527 17:40:06.409529 2696 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfk8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5nh2v_calico-system(bc5e9290-4a3a-4633-af11-d46d40c33905): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:40:06.410750 kubelet[2696]: E0527 17:40:06.410699 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5nh2v" podUID="bc5e9290-4a3a-4633-af11-d46d40c33905"
May 27 17:40:07.339004 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:55472.service - OpenSSH per-connection server daemon (10.0.0.1:55472).
May 27 17:40:07.408255 sshd[6044]: Accepted publickey for core from 10.0.0.1 port 55472 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:40:07.410384 sshd-session[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:40:07.415833 systemd-logind[1539]: New session 23 of user core.
May 27 17:40:07.427932 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:40:07.550333 sshd[6046]: Connection closed by 10.0.0.1 port 55472
May 27 17:40:07.550671 sshd-session[6044]: pam_unix(sshd:session): session closed for user core
May 27 17:40:07.555949 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:55472.service: Deactivated successfully.
May 27 17:40:07.558152 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:40:07.559139 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit.
May 27 17:40:07.560565 systemd-logind[1539]: Removed session 23.
May 27 17:40:12.568671 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:55486.service - OpenSSH per-connection server daemon (10.0.0.1:55486).
May 27 17:40:12.648983 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 55486 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:40:12.650940 sshd-session[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:40:12.656993 systemd-logind[1539]: New session 24 of user core.
May 27 17:40:12.663873 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:40:12.829874 sshd[6063]: Connection closed by 10.0.0.1 port 55486
May 27 17:40:12.830952 sshd-session[6061]: pam_unix(sshd:session): session closed for user core
May 27 17:40:12.836730 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:55486.service: Deactivated successfully.
May 27 17:40:12.839361 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:40:12.840409 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit.
May 27 17:40:12.842557 systemd-logind[1539]: Removed session 24.
May 27 17:40:13.051672 kubelet[2696]: E0527 17:40:13.051580 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-68649cd6d-qn77g" podUID="97287dcd-fd61-4753-a782-d95c978e039a"
May 27 17:40:13.260311 containerd[1555]: time="2025-05-27T17:40:13.260252701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e90c9cd32fdf0f415863f4283130d63def002d1848f6cef743d412cea82afad\" id:\"4b1e3b2aab0f0aa0eaba25275b4e74f088c8ae8434a84210eb69fddc0fed4bf7\" pid:6087 exited_at:{seconds:1748367613 nanos:259616064}"