May 27 17:45:07.946840 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:45:07.946881 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:07.946894 kernel: BIOS-provided physical RAM map:
May 27 17:45:07.946904 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 17:45:07.946913 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 27 17:45:07.946923 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 27 17:45:07.946935 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 27 17:45:07.946945 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 27 17:45:07.946958 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 27 17:45:07.946968 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 27 17:45:07.946977 kernel: NX (Execute Disable) protection: active
May 27 17:45:07.946987 kernel: APIC: Static calls initialized
May 27 17:45:07.946997 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
May 27 17:45:07.947009 kernel: extended physical RAM map:
May 27 17:45:07.947027 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 17:45:07.947038 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
May 27 17:45:07.947051 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
May 27 17:45:07.947062 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
May 27 17:45:07.947073 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 27 17:45:07.947084 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 27 17:45:07.947095 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 27 17:45:07.947106 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 27 17:45:07.947118 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 27 17:45:07.947129 kernel: efi: EFI v2.7 by EDK II
May 27 17:45:07.947145 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
May 27 17:45:07.947158 kernel: secureboot: Secure boot disabled
May 27 17:45:07.947170 kernel: SMBIOS 2.7 present.
May 27 17:45:07.947183 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 27 17:45:07.947211 kernel: DMI: Memory slots populated: 1/1
May 27 17:45:07.949245 kernel: Hypervisor detected: KVM
May 27 17:45:07.949261 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 17:45:07.949274 kernel: kvm-clock: using sched offset of 5997380979 cycles
May 27 17:45:07.949289 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:45:07.949302 kernel: tsc: Detected 2500.004 MHz processor
May 27 17:45:07.949316 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:45:07.949334 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:45:07.949347 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 27 17:45:07.949361 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 17:45:07.949374 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:45:07.949388 kernel: Using GB pages for direct mapping
May 27 17:45:07.949406 kernel: ACPI: Early table checksum verification disabled
May 27 17:45:07.949422 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 27 17:45:07.949436 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 27 17:45:07.949449 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 27 17:45:07.949461 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 27 17:45:07.949474 kernel: ACPI: FACS 0x00000000789D0000 000040
May 27 17:45:07.949488 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 27 17:45:07.949502 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 27 17:45:07.949515 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 27 17:45:07.949532 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 27 17:45:07.949545 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 27 17:45:07.949558 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 27 17:45:07.949572 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 27 17:45:07.949584 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 27 17:45:07.949597 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 27 17:45:07.949612 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 27 17:45:07.949627 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 27 17:45:07.949644 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 27 17:45:07.949656 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 27 17:45:07.949670 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 27 17:45:07.949684 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 27 17:45:07.949699 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 27 17:45:07.949714 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 27 17:45:07.949727 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 27 17:45:07.949739 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 27 17:45:07.949751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 27 17:45:07.949763 kernel: NUMA: Initialized distance table, cnt=1
May 27 17:45:07.949779 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
May 27 17:45:07.949792 kernel: Zone ranges:
May 27 17:45:07.949805 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:45:07.949818 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 27 17:45:07.949832 kernel: Normal empty
May 27 17:45:07.949846 kernel: Device empty
May 27 17:45:07.949858 kernel: Movable zone start for each node
May 27 17:45:07.949871 kernel: Early memory node ranges
May 27 17:45:07.949883 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 17:45:07.949931 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 27 17:45:07.949947 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 27 17:45:07.949961 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 27 17:45:07.949975 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:45:07.949990 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 17:45:07.950003 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 27 17:45:07.950018 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 27 17:45:07.950033 kernel: ACPI: PM-Timer IO Port: 0xb008
May 27 17:45:07.950047 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 17:45:07.950064 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 27 17:45:07.950078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 17:45:07.950092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:45:07.950106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 17:45:07.950121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 17:45:07.950136 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:45:07.950150 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 17:45:07.950164 kernel: TSC deadline timer available
May 27 17:45:07.950179 kernel: CPU topo: Max. logical packages: 1
May 27 17:45:07.950210 kernel: CPU topo: Max. logical dies: 1
May 27 17:45:07.950236 kernel: CPU topo: Max. dies per package: 1
May 27 17:45:07.950249 kernel: CPU topo: Max. threads per core: 2
May 27 17:45:07.950261 kernel: CPU topo: Num. cores per package: 1
May 27 17:45:07.950274 kernel: CPU topo: Num. threads per package: 2
May 27 17:45:07.950289 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 17:45:07.950303 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 17:45:07.950318 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 27 17:45:07.950332 kernel: Booting paravirtualized kernel on KVM
May 27 17:45:07.950346 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:45:07.950363 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 17:45:07.950376 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 17:45:07.950391 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 17:45:07.950405 kernel: pcpu-alloc: [0] 0 1
May 27 17:45:07.950420 kernel: kvm-guest: PV spinlocks enabled
May 27 17:45:07.950435 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:45:07.950453 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:07.950468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:45:07.950488 kernel: random: crng init done
May 27 17:45:07.950502 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:45:07.950517 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 17:45:07.950531 kernel: Fallback order for Node 0: 0
May 27 17:45:07.950546 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
May 27 17:45:07.950560 kernel: Policy zone: DMA32
May 27 17:45:07.950587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:45:07.950603 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 17:45:07.950618 kernel: Kernel/User page tables isolation: enabled
May 27 17:45:07.950634 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:45:07.950649 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:45:07.950667 kernel: Dynamic Preempt: voluntary
May 27 17:45:07.950683 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:45:07.950699 kernel: rcu: RCU event tracing is enabled.
May 27 17:45:07.950715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 17:45:07.950731 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:45:07.950747 kernel: Rude variant of Tasks RCU enabled.
May 27 17:45:07.950765 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:45:07.950780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:45:07.950796 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 17:45:07.950812 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:45:07.950828 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:45:07.950844 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:45:07.950859 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 17:45:07.950875 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:45:07.950893 kernel: Console: colour dummy device 80x25
May 27 17:45:07.950908 kernel: printk: legacy console [tty0] enabled
May 27 17:45:07.950924 kernel: printk: legacy console [ttyS0] enabled
May 27 17:45:07.950940 kernel: ACPI: Core revision 20240827
May 27 17:45:07.950956 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 27 17:45:07.950971 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:45:07.950987 kernel: x2apic enabled
May 27 17:45:07.951002 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:45:07.951018 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
May 27 17:45:07.951037 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
May 27 17:45:07.951052 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 27 17:45:07.951068 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 27 17:45:07.951084 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:45:07.951099 kernel: Spectre V2 : Mitigation: Retpolines
May 27 17:45:07.951115 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:45:07.951130 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 27 17:45:07.951146 kernel: RETBleed: Vulnerable
May 27 17:45:07.951161 kernel: Speculative Store Bypass: Vulnerable
May 27 17:45:07.951177 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 17:45:07.952291 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 17:45:07.952329 kernel: GDS: Unknown: Dependent on hypervisor status
May 27 17:45:07.952346 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 17:45:07.952361 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:45:07.952377 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:45:07.952393 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:45:07.952408 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 27 17:45:07.952423 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 27 17:45:07.952439 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 27 17:45:07.952454 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 27 17:45:07.952470 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 27 17:45:07.952488 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 27 17:45:07.952504 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:45:07.952519 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 27 17:45:07.952534 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 27 17:45:07.952550 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 27 17:45:07.952565 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 27 17:45:07.952580 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 27 17:45:07.952595 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 27 17:45:07.952611 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
May 27 17:45:07.952627 kernel: Freeing SMP alternatives memory: 32K
May 27 17:45:07.952642 kernel: pid_max: default: 32768 minimum: 301
May 27 17:45:07.952657 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:45:07.952676 kernel: landlock: Up and running.
May 27 17:45:07.952691 kernel: SELinux: Initializing.
May 27 17:45:07.952706 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 17:45:07.952721 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 17:45:07.952737 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 27 17:45:07.952753 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 27 17:45:07.952768 kernel: signal: max sigframe size: 3632
May 27 17:45:07.952784 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:45:07.952800 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:45:07.952816 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:45:07.952835 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 17:45:07.952851 kernel: smp: Bringing up secondary CPUs ...
May 27 17:45:07.952866 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:45:07.952882 kernel: .... node #0, CPUs: #1
May 27 17:45:07.952898 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 27 17:45:07.952915 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 27 17:45:07.952931 kernel: smp: Brought up 1 node, 2 CPUs
May 27 17:45:07.952946 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
May 27 17:45:07.952965 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125188K reserved, 0K cma-reserved)
May 27 17:45:07.952981 kernel: devtmpfs: initialized
May 27 17:45:07.952997 kernel: x86/mm: Memory block size: 128MB
May 27 17:45:07.953013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
May 27 17:45:07.953029 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:45:07.953044 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 17:45:07.953060 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:45:07.953076 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:45:07.953092 kernel: audit: initializing netlink subsys (disabled)
May 27 17:45:07.953110 kernel: audit: type=2000 audit(1748367905.051:1): state=initialized audit_enabled=0 res=1
May 27 17:45:07.953126 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:45:07.953141 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:45:07.953157 kernel: cpuidle: using governor menu
May 27 17:45:07.953172 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:45:07.953188 kernel: dca service started, version 1.12.1
May 27 17:45:07.954242 kernel: PCI: Using configuration type 1 for base access
May 27 17:45:07.954261 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:45:07.954276 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:45:07.954296 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:45:07.954312 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:45:07.954327 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:45:07.954342 kernel: ACPI: Added _OSI(Module Device)
May 27 17:45:07.954356 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:45:07.954372 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:45:07.954387 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:45:07.954402 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 27 17:45:07.954417 kernel: ACPI: Interpreter enabled
May 27 17:45:07.954436 kernel: ACPI: PM: (supports S0 S5)
May 27 17:45:07.954451 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:45:07.954467 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:45:07.954482 kernel: PCI: Using E820 reservations for host bridge windows
May 27 17:45:07.954496 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 27 17:45:07.954510 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:45:07.954743 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:45:07.954876 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 27 17:45:07.955008 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 27 17:45:07.955025 kernel: acpiphp: Slot [3] registered
May 27 17:45:07.955039 kernel: acpiphp: Slot [4] registered
May 27 17:45:07.955052 kernel: acpiphp: Slot [5] registered
May 27 17:45:07.955066 kernel: acpiphp: Slot [6] registered
May 27 17:45:07.955080 kernel: acpiphp: Slot [7] registered
May 27 17:45:07.955093 kernel: acpiphp: Slot [8] registered
May 27 17:45:07.955107 kernel: acpiphp: Slot [9] registered
May 27 17:45:07.955120 kernel: acpiphp: Slot [10] registered
May 27 17:45:07.955137 kernel: acpiphp: Slot [11] registered
May 27 17:45:07.955151 kernel: acpiphp: Slot [12] registered
May 27 17:45:07.955164 kernel: acpiphp: Slot [13] registered
May 27 17:45:07.955178 kernel: acpiphp: Slot [14] registered
May 27 17:45:07.955191 kernel: acpiphp: Slot [15] registered
May 27 17:45:07.955218 kernel: acpiphp: Slot [16] registered
May 27 17:45:07.955231 kernel: acpiphp: Slot [17] registered
May 27 17:45:07.955245 kernel: acpiphp: Slot [18] registered
May 27 17:45:07.955258 kernel: acpiphp: Slot [19] registered
May 27 17:45:07.955275 kernel: acpiphp: Slot [20] registered
May 27 17:45:07.955288 kernel: acpiphp: Slot [21] registered
May 27 17:45:07.955302 kernel: acpiphp: Slot [22] registered
May 27 17:45:07.955315 kernel: acpiphp: Slot [23] registered
May 27 17:45:07.955329 kernel: acpiphp: Slot [24] registered
May 27 17:45:07.955342 kernel: acpiphp: Slot [25] registered
May 27 17:45:07.955356 kernel: acpiphp: Slot [26] registered
May 27 17:45:07.955369 kernel: acpiphp: Slot [27] registered
May 27 17:45:07.955394 kernel: acpiphp: Slot [28] registered
May 27 17:45:07.955408 kernel: acpiphp: Slot [29] registered
May 27 17:45:07.955425 kernel: acpiphp: Slot [30] registered
May 27 17:45:07.955439 kernel: acpiphp: Slot [31] registered
May 27 17:45:07.955452 kernel: PCI host bridge to bus 0000:00
May 27 17:45:07.955582 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 17:45:07.955697 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 17:45:07.955810 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 17:45:07.955922 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 27 17:45:07.956038 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
May 27 17:45:07.956156 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:45:07.957528 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 27 17:45:07.957689 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 27 17:45:07.957834 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
May 27 17:45:07.957972 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 27 17:45:07.958111 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
May 27 17:45:07.959264 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
May 27 17:45:07.959458 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
May 27 17:45:07.959597 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
May 27 17:45:07.959733 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
May 27 17:45:07.959867 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
May 27 17:45:07.960012 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 17:45:07.960154 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
May 27 17:45:07.962041 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 17:45:07.962190 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 17:45:07.962390 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
May 27 17:45:07.962544 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
May 27 17:45:07.962691 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
May 27 17:45:07.962824 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
May 27 17:45:07.962851 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 17:45:07.962868 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 17:45:07.962884 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 17:45:07.962901 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 17:45:07.962918 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 27 17:45:07.962934 kernel: iommu: Default domain type: Translated
May 27 17:45:07.962950 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:45:07.962966 kernel: efivars: Registered efivars operations
May 27 17:45:07.962984 kernel: PCI: Using ACPI for IRQ routing
May 27 17:45:07.962999 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 17:45:07.963014 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
May 27 17:45:07.963028 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
May 27 17:45:07.963042 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
May 27 17:45:07.963173 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
May 27 17:45:07.964240 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
May 27 17:45:07.964383 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 17:45:07.964403 kernel: vgaarb: loaded
May 27 17:45:07.964424 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 27 17:45:07.964439 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
May 27 17:45:07.964453 kernel: clocksource: Switched to clocksource kvm-clock
May 27 17:45:07.964468 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:45:07.964483 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:45:07.964497 kernel: pnp: PnP ACPI init
May 27 17:45:07.964511 kernel: pnp: PnP ACPI: found 5 devices
May 27 17:45:07.964526 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:45:07.964540 kernel: NET: Registered PF_INET protocol family
May 27 17:45:07.964558 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:45:07.964572 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 27 17:45:07.964588 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:45:07.964602 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 17:45:07.964617 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 27 17:45:07.964632 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 27 17:45:07.964646 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 17:45:07.964660 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 17:45:07.964678 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:45:07.964693 kernel: NET: Registered PF_XDP protocol family
May 27 17:45:07.964818 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 17:45:07.964935 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 17:45:07.965048 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 17:45:07.965161 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 27 17:45:07.965302 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
May 27 17:45:07.965454 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 27 17:45:07.965477 kernel: PCI: CLS 0 bytes, default 64
May 27 17:45:07.965500 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 27 17:45:07.965518 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
May 27 17:45:07.965535 kernel: clocksource: Switched to clocksource tsc
May 27 17:45:07.965552 kernel: Initialise system trusted keyrings
May 27 17:45:07.965569 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 27 17:45:07.965586 kernel: Key type asymmetric registered
May 27 17:45:07.965603 kernel: Asymmetric key parser 'x509' registered
May 27 17:45:07.965619 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:45:07.965639 kernel: io scheduler mq-deadline registered
May 27 17:45:07.965654 kernel: io scheduler kyber registered
May 27 17:45:07.965668 kernel: io scheduler bfq registered
May 27 17:45:07.965684 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:45:07.965698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:45:07.965713 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:45:07.965729 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 17:45:07.965744 kernel: i8042: Warning: Keylock active
May 27 17:45:07.965759 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 17:45:07.965774 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 17:45:07.965925 kernel: rtc_cmos 00:00: RTC can wake from S4
May 27 17:45:07.966048 kernel: rtc_cmos 00:00: registered as rtc0
May 27 17:45:07.966168 kernel: rtc_cmos 00:00: setting system clock to 2025-05-27T17:45:07 UTC (1748367907)
May 27 17:45:07.966318 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 27 17:45:07.966366 kernel: intel_pstate: CPU model not supported
May 27 17:45:07.966386 kernel: efifb: probing for efifb
May 27 17:45:07.966402 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
May 27 17:45:07.966421 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 27 17:45:07.966438 kernel: efifb: scrolling: redraw
May 27 17:45:07.966455 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 17:45:07.966471 kernel: Console: switching to colour frame buffer device 100x37
May 27 17:45:07.966489 kernel: fb0: EFI VGA frame buffer device
May 27 17:45:07.966506 kernel: pstore: Using crash dump compression: deflate
May 27 17:45:07.966522 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 17:45:07.966539 kernel: NET: Registered PF_INET6 protocol family
May 27 17:45:07.966556 kernel: Segment Routing with IPv6
May 27 17:45:07.966575 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:45:07.966592 kernel: NET: Registered PF_PACKET protocol family
May 27 17:45:07.966609 kernel: Key type dns_resolver registered
May 27 17:45:07.966625 kernel: IPI shorthand broadcast: enabled
May 27 17:45:07.966641 kernel: sched_clock: Marking stable (2779002010, 157580640)->(3015434639, -78851989)
May 27 17:45:07.966661 kernel: registered taskstats version 1
May 27 17:45:07.966678 kernel: Loading compiled-in X.509 certificates
May 27 17:45:07.966694 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:45:07.966711 kernel: Demotion targets for Node 0: null
May 27 17:45:07.966731 kernel: Key type .fscrypt registered
May 27 17:45:07.966747 kernel: Key type fscrypt-provisioning registered
May 27 17:45:07.966763 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:45:07.966780 kernel: ima: Allocated hash algorithm: sha1
May 27 17:45:07.966795 kernel: ima: No architecture policies found
May 27 17:45:07.966811 kernel: clk: Disabling unused clocks
May 27 17:45:07.966828 kernel: Warning: unable to open an initial console.
May 27 17:45:07.966844 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 17:45:07.966861 kernel: Write protecting the kernel read-only data: 24576k
May 27 17:45:07.966881 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 17:45:07.966900 kernel: Run /init as init process
May 27 17:45:07.966917 kernel: with arguments:
May 27 17:45:07.966933 kernel: /init
May 27 17:45:07.966949 kernel: with environment:
May 27 17:45:07.966965 kernel: HOME=/
May 27 17:45:07.966985 kernel: TERM=linux
May 27 17:45:07.967001 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:45:07.967019 systemd[1]: Successfully made /usr/ read-only.
May 27 17:45:07.967041 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:45:07.967059 systemd[1]: Detected virtualization amazon. May 27 17:45:07.967077 systemd[1]: Detected architecture x86-64. May 27 17:45:07.967093 systemd[1]: Running in initrd. May 27 17:45:07.967113 systemd[1]: No hostname configured, using default hostname. May 27 17:45:07.967131 systemd[1]: Hostname set to . May 27 17:45:07.967148 systemd[1]: Initializing machine ID from VM UUID. May 27 17:45:07.967165 systemd[1]: Queued start job for default target initrd.target. May 27 17:45:07.967182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:45:07.967253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:45:07.967272 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 17:45:07.967290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:45:07.967311 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 17:45:07.967330 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 17:45:07.967349 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 17:45:07.967368 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 17:45:07.967394 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 27 17:45:07.967412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:45:07.967429 systemd[1]: Reached target paths.target - Path Units. May 27 17:45:07.967450 systemd[1]: Reached target slices.target - Slice Units. May 27 17:45:07.967468 systemd[1]: Reached target swap.target - Swaps. May 27 17:45:07.967486 systemd[1]: Reached target timers.target - Timer Units. May 27 17:45:07.967504 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:45:07.967522 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:45:07.967539 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 17:45:07.967557 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 17:45:07.967575 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:45:07.967595 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:45:07.967613 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:45:07.967631 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:45:07.967648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 17:45:07.967667 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:45:07.967684 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 17:45:07.967703 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 17:45:07.967720 systemd[1]: Starting systemd-fsck-usr.service... May 27 17:45:07.967738 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:45:07.967758 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 27 17:45:07.967776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:45:07.967794 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 17:45:07.967813 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:45:07.967865 systemd-journald[207]: Collecting audit messages is disabled. May 27 17:45:07.967905 systemd[1]: Finished systemd-fsck-usr.service. May 27 17:45:07.967924 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:45:07.967944 systemd-journald[207]: Journal started May 27 17:45:07.967984 systemd-journald[207]: Runtime Journal (/run/log/journal/ec27890ed4e7d1455902795ea7e45700) is 4.8M, max 38.4M, 33.6M free. May 27 17:45:07.944818 systemd-modules-load[208]: Inserted module 'overlay' May 27 17:45:07.975093 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:45:07.975627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:07.984419 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 17:45:07.996274 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 17:45:07.994343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:45:08.004484 kernel: Bridge firewalling registered May 27 17:45:08.000586 systemd-modules-load[208]: Inserted module 'br_netfilter' May 27 17:45:08.006996 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 27 17:45:08.012510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:45:08.013389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 27 17:45:08.019099 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 17:45:08.020289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:45:08.025759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:45:08.027951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:45:08.040080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:45:08.046387 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 17:45:08.049549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:45:08.050617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:45:08.063161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:45:08.080513 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:45:08.125370 systemd-resolved[247]: Positive Trust Anchors: May 27 17:45:08.125390 systemd-resolved[247]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:45:08.125456 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:45:08.135938 systemd-resolved[247]: Defaulting to hostname 'linux'. May 27 17:45:08.137411 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:45:08.138977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:45:08.181228 kernel: SCSI subsystem initialized May 27 17:45:08.191226 kernel: Loading iSCSI transport class v2.0-870. May 27 17:45:08.203225 kernel: iscsi: registered transport (tcp) May 27 17:45:08.225454 kernel: iscsi: registered transport (qla4xxx) May 27 17:45:08.225542 kernel: QLogic iSCSI HBA Driver May 27 17:45:08.244500 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:45:08.261618 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:45:08.262964 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:45:08.309762 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 17:45:08.311903 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 27 17:45:08.368233 kernel: raid6: avx512x4 gen() 17474 MB/s May 27 17:45:08.386223 kernel: raid6: avx512x2 gen() 17849 MB/s May 27 17:45:08.404238 kernel: raid6: avx512x1 gen() 17610 MB/s May 27 17:45:08.422223 kernel: raid6: avx2x4 gen() 17603 MB/s May 27 17:45:08.440235 kernel: raid6: avx2x2 gen() 17711 MB/s May 27 17:45:08.458467 kernel: raid6: avx2x1 gen() 13826 MB/s May 27 17:45:08.458531 kernel: raid6: using algorithm avx512x2 gen() 17849 MB/s May 27 17:45:08.477474 kernel: raid6: .... xor() 24459 MB/s, rmw enabled May 27 17:45:08.477548 kernel: raid6: using avx512x2 recovery algorithm May 27 17:45:08.499236 kernel: xor: automatically using best checksumming function avx May 27 17:45:08.671230 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 17:45:08.677760 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 17:45:08.679985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:45:08.709450 systemd-udevd[456]: Using default interface naming scheme 'v255'. May 27 17:45:08.716326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:45:08.720614 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 17:45:08.749882 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation May 27 17:45:08.777704 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:45:08.779759 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:45:08.847298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:45:08.851190 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 27 17:45:08.943227 kernel: cryptd: max_cpu_qlen set to 1000 May 27 17:45:08.972230 kernel: AES CTR mode by8 optimization enabled May 27 17:45:08.979320 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 27 17:45:08.979738 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 27 17:45:08.992219 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 27 17:45:08.991088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:45:09.013336 kernel: nvme nvme0: pci function 0000:00:04.0 May 27 17:45:09.013622 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 27 17:45:09.013645 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:38:fe:d2:d3:23 May 27 17:45:09.013832 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 May 27 17:45:08.991372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:09.016426 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 27 17:45:08.999518 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:45:09.021575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:45:09.024946 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:45:09.043325 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 17:45:09.043375 kernel: GPT:9289727 != 16777215 May 27 17:45:09.043394 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 17:45:09.043412 kernel: GPT:9289727 != 16777215 May 27 17:45:09.043428 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 17:45:09.043446 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 17:45:09.036291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:45:09.036423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 17:45:09.038184 (udev-worker)[506]: Network interface NamePolicy= disabled on kernel command line. May 27 17:45:09.044934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:45:09.075250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:09.082235 kernel: nvme nvme0: using unchecked data buffer May 27 17:45:09.189606 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 27 17:45:09.244365 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 27 17:45:09.257467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 27 17:45:09.258371 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 17:45:09.268936 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 27 17:45:09.269526 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 27 17:45:09.271721 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:45:09.272334 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:45:09.273483 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:45:09.275285 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 17:45:09.280393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 17:45:09.305752 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 17:45:09.308017 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 17:45:09.308082 disk-uuid[694]: Primary Header is updated. May 27 17:45:09.308082 disk-uuid[694]: Secondary Entries is updated. May 27 17:45:09.308082 disk-uuid[694]: Secondary Header is updated. 
May 27 17:45:10.328346 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 17:45:10.329051 disk-uuid[701]: The operation has completed successfully. May 27 17:45:10.444968 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 17:45:10.445077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 17:45:10.476116 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 17:45:10.498020 sh[962]: Success May 27 17:45:10.524478 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 17:45:10.524550 kernel: device-mapper: uevent: version 1.0.3 May 27 17:45:10.527217 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 17:45:10.538220 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 27 17:45:10.650812 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 17:45:10.655338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 17:45:10.663742 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 17:45:10.686299 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 17:45:10.690228 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (986) May 27 17:45:10.694473 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 17:45:10.694533 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:10.696525 kernel: BTRFS info (device dm-0): using free-space-tree May 27 17:45:10.870773 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 17:45:10.872058 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
May 27 17:45:10.873005 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 17:45:10.874154 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 17:45:10.877425 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 17:45:10.917270 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1019) May 27 17:45:10.926169 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:10.926277 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:10.926299 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 17:45:10.944262 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:10.945937 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 17:45:10.948475 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 17:45:10.984944 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:45:10.988135 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:45:11.030525 systemd-networkd[1155]: lo: Link UP May 27 17:45:11.030537 systemd-networkd[1155]: lo: Gained carrier May 27 17:45:11.032307 systemd-networkd[1155]: Enumeration completed May 27 17:45:11.032737 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:45:11.032744 systemd-networkd[1155]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:45:11.033821 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 27 17:45:11.035498 systemd[1]: Reached target network.target - Network. May 27 17:45:11.036563 systemd-networkd[1155]: eth0: Link UP May 27 17:45:11.036568 systemd-networkd[1155]: eth0: Gained carrier May 27 17:45:11.036582 systemd-networkd[1155]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:45:11.050388 systemd-networkd[1155]: eth0: DHCPv4 address 172.31.23.101/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 27 17:45:11.452505 ignition[1111]: Ignition 2.21.0 May 27 17:45:11.453169 ignition[1111]: Stage: fetch-offline May 27 17:45:11.453482 ignition[1111]: no configs at "/usr/lib/ignition/base.d" May 27 17:45:11.453493 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:45:11.453821 ignition[1111]: Ignition finished successfully May 27 17:45:11.456244 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:45:11.458103 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 27 17:45:11.485557 ignition[1165]: Ignition 2.21.0 May 27 17:45:11.486164 ignition[1165]: Stage: fetch May 27 17:45:11.486601 ignition[1165]: no configs at "/usr/lib/ignition/base.d" May 27 17:45:11.486614 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:45:11.486740 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:45:11.496604 ignition[1165]: PUT result: OK May 27 17:45:11.498455 ignition[1165]: parsed url from cmdline: "" May 27 17:45:11.498466 ignition[1165]: no config URL provided May 27 17:45:11.498475 ignition[1165]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:45:11.498486 ignition[1165]: no config at "/usr/lib/ignition/user.ign" May 27 17:45:11.498513 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:45:11.499210 ignition[1165]: PUT result: OK May 27 17:45:11.499290 ignition[1165]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 27 17:45:11.499966 ignition[1165]: GET result: OK May 27 17:45:11.500051 ignition[1165]: parsing config with SHA512: 1db8dfe8cfc07d598eae23c2f2769e8dc69fd98c92919623fe2f8df5c6c13bd5e527cab70d5dfb9afcb6f246b1db919fdfbd22275fcb94e5116d237d42e9d23d May 27 17:45:11.509940 unknown[1165]: fetched base config from "system" May 27 17:45:11.509958 unknown[1165]: fetched base config from "system" May 27 17:45:11.510529 ignition[1165]: fetch: fetch complete May 27 17:45:11.509965 unknown[1165]: fetched user config from "aws" May 27 17:45:11.510535 ignition[1165]: fetch: fetch passed May 27 17:45:11.510596 ignition[1165]: Ignition finished successfully May 27 17:45:11.513971 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 17:45:11.515526 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 27 17:45:11.543491 ignition[1172]: Ignition 2.21.0 May 27 17:45:11.543510 ignition[1172]: Stage: kargs May 27 17:45:11.543913 ignition[1172]: no configs at "/usr/lib/ignition/base.d" May 27 17:45:11.543926 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:45:11.544044 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:45:11.545455 ignition[1172]: PUT result: OK May 27 17:45:11.549886 ignition[1172]: kargs: kargs passed May 27 17:45:11.549962 ignition[1172]: Ignition finished successfully May 27 17:45:11.551706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 17:45:11.553533 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 17:45:11.582168 ignition[1179]: Ignition 2.21.0 May 27 17:45:11.582186 ignition[1179]: Stage: disks May 27 17:45:11.582601 ignition[1179]: no configs at "/usr/lib/ignition/base.d" May 27 17:45:11.582614 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:45:11.582722 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:45:11.584212 ignition[1179]: PUT result: OK May 27 17:45:11.588360 ignition[1179]: disks: disks passed May 27 17:45:11.588433 ignition[1179]: Ignition finished successfully May 27 17:45:11.590311 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 17:45:11.590955 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 17:45:11.591488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 17:45:11.592168 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:45:11.592783 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:45:11.593367 systemd[1]: Reached target basic.target - Basic System. May 27 17:45:11.595066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 27 17:45:11.639817 systemd-fsck[1188]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 17:45:11.642545 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 17:45:11.644376 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 17:45:11.813242 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 17:45:11.813817 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 17:45:11.814796 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 17:45:11.816716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:45:11.819282 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 17:45:11.820610 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 17:45:11.820657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 17:45:11.820682 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:45:11.829137 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 17:45:11.831246 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 17:45:11.848219 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1207) May 27 17:45:11.854918 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:11.854977 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:11.854990 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 17:45:11.864176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 17:45:12.221147 initrd-setup-root[1231]: cut: /sysroot/etc/passwd: No such file or directory May 27 17:45:12.244221 initrd-setup-root[1238]: cut: /sysroot/etc/group: No such file or directory May 27 17:45:12.262185 initrd-setup-root[1245]: cut: /sysroot/etc/shadow: No such file or directory May 27 17:45:12.266966 initrd-setup-root[1252]: cut: /sysroot/etc/gshadow: No such file or directory May 27 17:45:12.356743 systemd-networkd[1155]: eth0: Gained IPv6LL May 27 17:45:12.572290 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 17:45:12.574366 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 17:45:12.577401 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 17:45:12.593795 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 17:45:12.596774 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:12.631669 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 17:45:12.633415 ignition[1320]: INFO : Ignition 2.21.0 May 27 17:45:12.633415 ignition[1320]: INFO : Stage: mount May 27 17:45:12.636503 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:45:12.636503 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:45:12.636503 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:45:12.636503 ignition[1320]: INFO : PUT result: OK May 27 17:45:12.639696 ignition[1320]: INFO : mount: mount passed May 27 17:45:12.640273 ignition[1320]: INFO : Ignition finished successfully May 27 17:45:12.641253 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 17:45:12.643139 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 17:45:12.815900 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 27 17:45:12.858234 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1331) May 27 17:45:12.862159 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:12.862237 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:12.862252 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 17:45:12.872426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 17:45:12.905217 ignition[1348]: INFO : Ignition 2.21.0 May 27 17:45:12.905217 ignition[1348]: INFO : Stage: files May 27 17:45:12.906706 ignition[1348]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:45:12.906706 ignition[1348]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:45:12.906706 ignition[1348]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:45:12.906706 ignition[1348]: INFO : PUT result: OK May 27 17:45:12.912312 ignition[1348]: DEBUG : files: compiled without relabeling support, skipping May 27 17:45:12.913473 ignition[1348]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 17:45:12.913473 ignition[1348]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 17:45:12.917130 ignition[1348]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 17:45:12.917885 ignition[1348]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 17:45:12.917885 ignition[1348]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 17:45:12.917637 unknown[1348]: wrote ssh authorized keys file for user: core May 27 17:45:12.920429 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 17:45:12.921268 ignition[1348]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 27 17:45:12.994410 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 17:45:13.377867 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 17:45:13.377867 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 17:45:13.380476 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 17:45:13.380476 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 17:45:13.380476 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 17:45:13.380476 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:45:13.380476 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:45:13.380476 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:45:13.380476 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:45:13.385860 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:45:13.385860 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:45:13.385860 ignition[1348]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:45:13.388629 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:45:13.388629 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:45:13.388629 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 17:45:14.107154 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 17:45:14.770542 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:45:14.770542 ignition[1348]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 17:45:14.773654 ignition[1348]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:45:14.780523 ignition[1348]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:45:14.780523 ignition[1348]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 17:45:14.780523 ignition[1348]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 27 17:45:14.785004 ignition[1348]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 27 17:45:14.785004 ignition[1348]: INFO : files: createResultFile: createFiles: op(e): [started] writing file 
"/sysroot/etc/.ignition-result.json" May 27 17:45:14.785004 ignition[1348]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 17:45:14.785004 ignition[1348]: INFO : files: files passed May 27 17:45:14.785004 ignition[1348]: INFO : Ignition finished successfully May 27 17:45:14.782999 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 17:45:14.787533 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 17:45:14.793723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 17:45:14.802612 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 17:45:14.803224 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 17:45:14.810212 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:45:14.810212 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 17:45:14.813712 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:45:14.815150 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:45:14.816572 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 17:45:14.818094 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 17:45:14.874838 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 17:45:14.874997 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 17:45:14.876380 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 17:45:14.877544 systemd[1]: Reached target initrd.target - Initrd Default Target. 
May 27 17:45:14.878461 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 17:45:14.879741 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 17:45:14.900441 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:45:14.902522 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 17:45:14.925405 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 17:45:14.926169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:45:14.927276 systemd[1]: Stopped target timers.target - Timer Units. May 27 17:45:14.928224 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 17:45:14.928461 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:45:14.929583 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 17:45:14.930463 systemd[1]: Stopped target basic.target - Basic System. May 27 17:45:14.931244 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 17:45:14.932087 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:45:14.932868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 17:45:14.933644 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 17:45:14.934429 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 17:45:14.935181 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:45:14.936039 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 17:45:14.937161 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 17:45:14.937968 systemd[1]: Stopped target swap.target - Swaps. 
May 27 17:45:14.938701 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 17:45:14.938943 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 17:45:14.940070 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 17:45:14.940911 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:45:14.941553 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 17:45:14.941703 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:45:14.942376 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 17:45:14.942596 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 17:45:14.943714 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 17:45:14.943959 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:45:14.944624 systemd[1]: ignition-files.service: Deactivated successfully. May 27 17:45:14.944826 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 17:45:14.947416 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 17:45:14.947912 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 17:45:14.950331 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:45:14.952025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 17:45:14.954306 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 17:45:14.954521 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:45:14.957056 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 17:45:14.957249 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:45:14.963585 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 27 17:45:14.963718 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 17:45:14.986225 ignition[1401]: INFO : Ignition 2.21.0 May 27 17:45:14.986225 ignition[1401]: INFO : Stage: umount May 27 17:45:14.986225 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:45:14.986225 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:45:14.986225 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:45:14.990278 ignition[1401]: INFO : PUT result: OK May 27 17:45:14.993061 ignition[1401]: INFO : umount: umount passed May 27 17:45:14.993616 ignition[1401]: INFO : Ignition finished successfully May 27 17:45:14.995538 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 17:45:14.995670 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 17:45:14.997254 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 17:45:14.997338 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 17:45:14.998121 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 17:45:14.998224 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 17:45:14.998920 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 17:45:14.999004 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 17:45:14.999819 systemd[1]: Stopped target network.target - Network. May 27 17:45:15.000569 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 17:45:15.000638 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:45:15.002110 systemd[1]: Stopped target paths.target - Path Units. May 27 17:45:15.002743 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 17:45:15.006281 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 27 17:45:15.006801 systemd[1]: Stopped target slices.target - Slice Units. May 27 17:45:15.007415 systemd[1]: Stopped target sockets.target - Socket Units. May 27 17:45:15.008911 systemd[1]: iscsid.socket: Deactivated successfully. May 27 17:45:15.008968 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:45:15.009598 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 17:45:15.009653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:45:15.010219 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 17:45:15.010311 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 17:45:15.010930 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 17:45:15.010993 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 17:45:15.011854 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 17:45:15.012466 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 17:45:15.016265 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 17:45:15.016958 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 17:45:15.017063 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 17:45:15.020980 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 17:45:15.021715 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 17:45:15.021816 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:45:15.024222 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 17:45:15.026528 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 17:45:15.026671 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
May 27 17:45:15.028678 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 17:45:15.028961 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 17:45:15.029902 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 17:45:15.029952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 17:45:15.031572 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 17:45:15.033246 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 17:45:15.033321 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:45:15.033862 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:45:15.033924 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:45:15.036416 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 17:45:15.036483 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 17:45:15.038769 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:45:15.043318 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 17:45:15.049827 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 17:45:15.050809 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:45:15.053829 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 17:45:15.054596 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 17:45:15.055603 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 17:45:15.055662 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:45:15.056572 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
May 27 17:45:15.056642 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 17:45:15.057734 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 17:45:15.057805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 17:45:15.058883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 17:45:15.058957 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:45:15.063012 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 17:45:15.063607 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 17:45:15.063691 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:45:15.067441 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 17:45:15.067529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:45:15.069289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:45:15.069401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:15.072279 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 17:45:15.074352 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 17:45:15.081563 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 17:45:15.081708 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 17:45:15.177215 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 17:45:15.177337 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 17:45:15.178055 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 17:45:15.178656 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 27 17:45:15.178721 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 17:45:15.180815 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 17:45:15.198603 systemd[1]: Switching root. May 27 17:45:15.258799 systemd-journald[207]: Journal stopped May 27 17:45:17.498355 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). May 27 17:45:17.498461 kernel: SELinux: policy capability network_peer_controls=1 May 27 17:45:17.498485 kernel: SELinux: policy capability open_perms=1 May 27 17:45:17.498511 kernel: SELinux: policy capability extended_socket_class=1 May 27 17:45:17.498536 kernel: SELinux: policy capability always_check_network=0 May 27 17:45:17.498553 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 17:45:17.498573 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 17:45:17.498598 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 17:45:17.498622 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 17:45:17.498642 kernel: SELinux: policy capability userspace_initial_context=0 May 27 17:45:17.498667 kernel: audit: type=1403 audit(1748367915.684:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 17:45:17.498687 systemd[1]: Successfully loaded SELinux policy in 111.573ms. May 27 17:45:17.498718 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.319ms. May 27 17:45:17.498741 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:45:17.498762 systemd[1]: Detected virtualization amazon. May 27 17:45:17.498783 systemd[1]: Detected architecture x86-64. May 27 17:45:17.498805 systemd[1]: Detected first boot. 
May 27 17:45:17.498827 systemd[1]: Initializing machine ID from VM UUID. May 27 17:45:17.498849 zram_generator::config[1446]: No configuration found. May 27 17:45:17.498878 kernel: Guest personality initialized and is inactive May 27 17:45:17.498898 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 17:45:17.498928 kernel: Initialized host personality May 27 17:45:17.498951 kernel: NET: Registered PF_VSOCK protocol family May 27 17:45:17.498972 systemd[1]: Populated /etc with preset unit settings. May 27 17:45:17.498997 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 17:45:17.499019 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 17:45:17.499048 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 17:45:17.499070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 17:45:17.499091 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 17:45:17.499114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 17:45:17.499139 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 17:45:17.499160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 17:45:17.499180 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 17:45:17.502272 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 17:45:17.502324 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 17:45:17.502344 systemd[1]: Created slice user.slice - User and Session Slice. May 27 17:45:17.502364 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:45:17.502385 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 27 17:45:17.502412 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 17:45:17.502432 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 17:45:17.502453 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 17:45:17.502473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:45:17.502494 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 17:45:17.502513 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:45:17.502535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:45:17.502555 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 17:45:17.502579 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 17:45:17.502599 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 17:45:17.502618 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 17:45:17.502639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:45:17.502659 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:45:17.502680 systemd[1]: Reached target slices.target - Slice Units. May 27 17:45:17.502702 systemd[1]: Reached target swap.target - Swaps. May 27 17:45:17.502723 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 17:45:17.502745 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 17:45:17.502771 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 17:45:17.502796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 27 17:45:17.502819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:45:17.502839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:45:17.502860 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 17:45:17.502883 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 17:45:17.502903 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 17:45:17.502924 systemd[1]: Mounting media.mount - External Media Directory... May 27 17:45:17.502947 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:45:17.502968 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 17:45:17.502985 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 17:45:17.503003 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 17:45:17.503024 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 17:45:17.503044 systemd[1]: Reached target machines.target - Containers. May 27 17:45:17.503063 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 17:45:17.503083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:45:17.503102 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:45:17.503125 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 17:45:17.503143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:45:17.503164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 27 17:45:17.503183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:45:17.512644 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 17:45:17.512687 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:45:17.512710 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 17:45:17.512732 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 17:45:17.512754 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 17:45:17.512784 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 17:45:17.512805 systemd[1]: Stopped systemd-fsck-usr.service. May 27 17:45:17.512827 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:45:17.512849 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:45:17.512873 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:45:17.512894 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:45:17.512916 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 17:45:17.512939 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 17:45:17.512964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:45:17.512986 systemd[1]: verity-setup.service: Deactivated successfully. May 27 17:45:17.513007 systemd[1]: Stopped verity-setup.service. 
May 27 17:45:17.513029 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:45:17.513053 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 17:45:17.513075 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 17:45:17.513096 systemd[1]: Mounted media.mount - External Media Directory. May 27 17:45:17.513117 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 17:45:17.513139 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 17:45:17.513161 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 17:45:17.513185 kernel: fuse: init (API version 7.41) May 27 17:45:17.516949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:45:17.516985 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 17:45:17.517008 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 17:45:17.517032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:45:17.517053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:45:17.517076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:45:17.517098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:45:17.517118 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 17:45:17.517146 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 17:45:17.517168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:45:17.517189 kernel: loop: module loaded May 27 17:45:17.517225 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 17:45:17.517247 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 27 17:45:17.517268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:45:17.517333 systemd-journald[1529]: Collecting audit messages is disabled. May 27 17:45:17.517372 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 17:45:17.517399 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 17:45:17.517421 systemd-journald[1529]: Journal started May 27 17:45:17.517463 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec27890ed4e7d1455902795ea7e45700) is 4.8M, max 38.4M, 33.6M free. May 27 17:45:17.539257 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 17:45:17.539349 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:45:17.052769 systemd[1]: Queued start job for default target multi-user.target. May 27 17:45:17.079139 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 27 17:45:17.082449 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 17:45:17.545001 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 17:45:17.550246 kernel: ACPI: bus type drm_connector registered May 27 17:45:17.550315 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 17:45:17.554220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:45:17.563455 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 17:45:17.563562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 17:45:17.569263 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
May 27 17:45:17.574230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:45:17.579218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:45:17.592318 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 17:45:17.602230 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:45:17.612776 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 17:45:17.616946 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:45:17.618322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:45:17.621408 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:45:17.624343 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 17:45:17.626392 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 17:45:17.628581 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 17:45:17.630715 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 17:45:17.648443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:45:17.658697 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 17:45:17.659417 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:45:17.661090 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 17:45:17.663432 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 17:45:17.668509 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
May 27 17:45:17.677293 kernel: loop0: detected capacity change from 0 to 72352 May 27 17:45:17.702283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:45:17.705805 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec27890ed4e7d1455902795ea7e45700 is 47.022ms for 1018 entries. May 27 17:45:17.705805 systemd-journald[1529]: System Journal (/var/log/journal/ec27890ed4e7d1455902795ea7e45700) is 8M, max 195.6M, 187.6M free. May 27 17:45:17.779366 systemd-journald[1529]: Received client request to flush runtime journal. May 27 17:45:17.783090 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 17:45:17.784491 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 17:45:17.797186 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 17:45:17.802992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:45:17.827438 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 17:45:17.857227 kernel: loop1: detected capacity change from 0 to 146240 May 27 17:45:17.886491 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. May 27 17:45:17.886922 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. May 27 17:45:17.897298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:45:17.986223 kernel: loop2: detected capacity change from 0 to 113872 May 27 17:45:18.086236 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 27 17:45:18.102229 kernel: loop3: detected capacity change from 0 to 229808 May 27 17:45:18.388571 kernel: loop4: detected capacity change from 0 to 72352 May 27 17:45:18.417231 kernel: loop5: detected capacity change from 0 to 146240 May 27 17:45:18.443221 kernel: loop6: detected capacity change from 0 to 113872 May 27 17:45:18.458228 kernel: loop7: detected capacity change from 0 to 229808 May 27 17:45:18.503752 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 27 17:45:18.504735 (sd-merge)[1603]: Merged extensions into '/usr'. May 27 17:45:18.510963 systemd[1]: Reload requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... May 27 17:45:18.511104 systemd[1]: Reloading... May 27 17:45:18.599234 zram_generator::config[1625]: No configuration found. May 27 17:45:18.766073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:45:18.880307 systemd[1]: Reloading finished in 368 ms. May 27 17:45:18.910404 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 17:45:18.911705 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 17:45:18.922623 systemd[1]: Starting ensure-sysext.service... May 27 17:45:18.927372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:45:18.935943 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:45:18.973854 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... May 27 17:45:18.973881 systemd[1]: Reloading... May 27 17:45:18.993007 systemd-udevd[1683]: Using default interface naming scheme 'v255'. 
May 27 17:45:19.008252 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:45:19.008791 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:45:19.011047 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:45:19.011415 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:45:19.012700 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:45:19.013063 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
May 27 17:45:19.013154 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
May 27 17:45:19.022753 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:45:19.022769 systemd-tmpfiles[1682]: Skipping /boot
May 27 17:45:19.085469 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:45:19.085486 systemd-tmpfiles[1682]: Skipping /boot
May 27 17:45:19.113921 zram_generator::config[1710]: No configuration found.
May 27 17:45:19.391704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:45:19.432703 (udev-worker)[1720]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:45:19.466241 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:45:19.539227 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 17:45:19.560219 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:45:19.560312 kernel: ACPI: button: Power Button [PWRF]
May 27 17:45:19.570222 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
May 27 17:45:19.605226 kernel: ACPI: button: Sleep Button [SLPF]
May 27 17:45:19.616346 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:45:19.616793 systemd[1]: Reloading finished in 642 ms.
May 27 17:45:19.633850 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 27 17:45:19.632894 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:45:19.634769 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:45:19.647043 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:45:19.668487 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:45:19.678479 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:45:19.686489 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:45:19.692509 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:45:19.698493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:45:19.705398 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:45:19.721974 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:45:19.727719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:19.728703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:19.734964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:45:19.745623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:45:19.756448 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:45:19.757727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:19.758052 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:19.758424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:19.766531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:45:19.766775 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:45:19.776744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:19.777064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:19.791463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:45:19.792271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:19.792476 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:19.792635 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:19.814158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:19.814833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:19.816924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:45:19.817886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:19.818154 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:19.818501 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:45:19.820412 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:19.821682 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:45:19.825875 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:45:19.827787 systemd[1]: Finished ensure-sysext.service.
May 27 17:45:19.844417 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:45:19.864587 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:45:19.865058 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:45:19.880181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:19.882092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:45:19.883064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:45:19.892284 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:45:19.900104 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:45:19.902512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:45:19.903908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:45:19.904158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:45:19.912333 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:45:19.920012 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:45:19.923580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:45:19.923876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:19.927956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:20.001410 augenrules[1861]: No rules
May 27 17:45:20.002622 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:45:20.002890 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:45:20.018926 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:45:20.020316 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:45:20.022145 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:45:20.126312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:20.267999 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 27 17:45:20.270316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:45:20.320433 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:45:20.344912 systemd-networkd[1817]: lo: Link UP
May 27 17:45:20.344930 systemd-networkd[1817]: lo: Gained carrier
May 27 17:45:20.346724 systemd-networkd[1817]: Enumeration completed
May 27 17:45:20.346868 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:45:20.349582 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:45:20.349597 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:45:20.351068 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:45:20.356322 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:45:20.357398 systemd-networkd[1817]: eth0: Link UP
May 27 17:45:20.357587 systemd-networkd[1817]: eth0: Gained carrier
May 27 17:45:20.357622 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:45:20.368588 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.23.101/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 27 17:45:20.381302 systemd-resolved[1818]: Positive Trust Anchors:
May 27 17:45:20.381683 systemd-resolved[1818]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:45:20.381820 systemd-resolved[1818]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:45:20.383405 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:45:20.387281 systemd-resolved[1818]: Defaulting to hostname 'linux'.
May 27 17:45:20.389860 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:45:20.390454 systemd[1]: Reached target network.target - Network.
May 27 17:45:20.390892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:45:20.391334 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:45:20.391820 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:45:20.392254 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:45:20.392635 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:45:20.393144 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:45:20.393665 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:45:20.394037 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:45:20.394433 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:45:20.394476 systemd[1]: Reached target paths.target - Path Units.
May 27 17:45:20.394848 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:45:20.396751 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:45:20.398691 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:45:20.401817 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:45:20.402564 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:45:20.403059 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:45:20.405814 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:45:20.406758 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:45:20.407944 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:45:20.409295 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:45:20.409743 systemd[1]: Reached target basic.target - Basic System.
May 27 17:45:20.410183 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:45:20.410285 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:45:20.411383 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:45:20.414354 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 17:45:20.417482 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:45:20.419568 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:45:20.425972 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:45:20.429109 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:45:20.429754 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:45:20.431858 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:45:20.437025 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:45:20.444824 systemd[1]: Started ntpd.service - Network Time Service.
May 27 17:45:20.448356 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:45:20.452492 systemd[1]: Starting setup-oem.service - Setup OEM...
May 27 17:45:20.460445 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:45:20.460632 jq[1970]: false
May 27 17:45:20.467475 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:45:20.484764 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:45:20.487777 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:45:20.489604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:45:20.490610 google_oslogin_nss_cache[1972]: oslogin_cache_refresh[1972]: Refreshing passwd entry cache
May 27 17:45:20.490942 oslogin_cache_refresh[1972]: Refreshing passwd entry cache
May 27 17:45:20.495509 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:45:20.505364 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:45:20.514146 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:45:20.517474 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:45:20.517757 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:45:20.521153 google_oslogin_nss_cache[1972]: oslogin_cache_refresh[1972]: Failure getting users, quitting
May 27 17:45:20.521153 google_oslogin_nss_cache[1972]: oslogin_cache_refresh[1972]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:45:20.521153 google_oslogin_nss_cache[1972]: oslogin_cache_refresh[1972]: Refreshing group entry cache
May 27 17:45:20.520367 oslogin_cache_refresh[1972]: Failure getting users, quitting
May 27 17:45:20.520392 oslogin_cache_refresh[1972]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:45:20.520455 oslogin_cache_refresh[1972]: Refreshing group entry cache
May 27 17:45:20.522292 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:45:20.522608 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:45:20.540643 google_oslogin_nss_cache[1972]: oslogin_cache_refresh[1972]: Failure getting groups, quitting
May 27 17:45:20.540643 google_oslogin_nss_cache[1972]: oslogin_cache_refresh[1972]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:45:20.538479 oslogin_cache_refresh[1972]: Failure getting groups, quitting
May 27 17:45:20.538498 oslogin_cache_refresh[1972]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:45:20.561574 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:45:20.562821 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:45:20.576817 extend-filesystems[1971]: Found loop4
May 27 17:45:20.576817 extend-filesystems[1971]: Found loop5
May 27 17:45:20.576817 extend-filesystems[1971]: Found loop6
May 27 17:45:20.576817 extend-filesystems[1971]: Found loop7
May 27 17:45:20.576817 extend-filesystems[1971]: Found nvme0n1
May 27 17:45:20.576817 extend-filesystems[1971]: Found nvme0n1p1
May 27 17:45:20.576817 extend-filesystems[1971]: Found nvme0n1p2
May 27 17:45:20.576817 extend-filesystems[1971]: Found nvme0n1p3
May 27 17:45:20.576817 extend-filesystems[1971]: Found usr
May 27 17:45:20.576817 extend-filesystems[1971]: Found nvme0n1p4
May 27 17:45:20.585273 extend-filesystems[1971]: Found nvme0n1p6
May 27 17:45:20.585273 extend-filesystems[1971]: Found nvme0n1p7
May 27 17:45:20.585273 extend-filesystems[1971]: Found nvme0n1p9
May 27 17:45:20.585273 extend-filesystems[1971]: Checking size of /dev/nvme0n1p9
May 27 17:45:20.602898 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:45:20.603224 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:45:20.621061 jq[1983]: true
May 27 17:45:20.644488 tar[1986]: linux-amd64/LICENSE
May 27 17:45:20.647050 tar[1986]: linux-amd64/helm
May 27 17:45:20.667049 systemd[1]: Finished setup-oem.service - Setup OEM.
May 27 17:45:20.677712 extend-filesystems[1971]: Resized partition /dev/nvme0n1p9
May 27 17:45:20.674708 (ntainerd)[2001]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:45:20.687247 ntpd[1974]: ntpd 4.2.8p17@1.4004-o Tue May 27 14:54:35 UTC 2025 (1): Starting
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: ntpd 4.2.8p17@1.4004-o Tue May 27 14:54:35 UTC 2025 (1): Starting
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: ----------------------------------------------------
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: ntp-4 is maintained by Network Time Foundation,
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: corporation. Support and training for ntp-4 are
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: available at https://www.nwtime.org/support
May 27 17:45:20.694623 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: ----------------------------------------------------
May 27 17:45:20.694999 update_engine[1981]: I20250527 17:45:20.689337 1981 main.cc:92] Flatcar Update Engine starting
May 27 17:45:20.687282 ntpd[1974]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 17:45:20.687293 ntpd[1974]: ----------------------------------------------------
May 27 17:45:20.687303 ntpd[1974]: ntp-4 is maintained by Network Time Foundation,
May 27 17:45:20.687313 ntpd[1974]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 17:45:20.687323 ntpd[1974]: corporation. Support and training for ntp-4 are
May 27 17:45:20.687333 ntpd[1974]: available at https://www.nwtime.org/support
May 27 17:45:20.687342 ntpd[1974]: ----------------------------------------------------
May 27 17:45:20.722234 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 27 17:45:20.722326 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: proto: precision = 0.091 usec (-23)
May 27 17:45:20.722326 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: basedate set to 2025-05-15
May 27 17:45:20.722326 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: gps base set to 2025-05-18 (week 2367)
May 27 17:45:20.716464 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:45:20.722541 extend-filesystems[2021]: resize2fs 1.47.2 (1-Jan-2025)
May 27 17:45:20.728821 coreos-metadata[1967]: May 27 17:45:20.712 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 27 17:45:20.713658 ntpd[1974]: proto: precision = 0.091 usec (-23)
May 27 17:45:20.724461 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:45:20.716219 dbus-daemon[1968]: [system] SELinux support is enabled
May 27 17:45:20.738636 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Listen and drop on 0 v6wildcard [::]:123
May 27 17:45:20.738636 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 17:45:20.724499 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:45:20.738808 jq[2011]: true
May 27 17:45:20.739021 coreos-metadata[1967]: May 27 17:45:20.729 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 27 17:45:20.739021 coreos-metadata[1967]: May 27 17:45:20.730 INFO Fetch successful
May 27 17:45:20.739021 coreos-metadata[1967]: May 27 17:45:20.730 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 27 17:45:20.739021 coreos-metadata[1967]: May 27 17:45:20.731 INFO Fetch successful
May 27 17:45:20.739021 coreos-metadata[1967]: May 27 17:45:20.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 27 17:45:20.718321 ntpd[1974]: basedate set to 2025-05-15
May 27 17:45:20.725544 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:45:20.718344 ntpd[1974]: gps base set to 2025-05-18 (week 2367)
May 27 17:45:20.725569 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:45:20.736368 ntpd[1974]: Listen and drop on 0 v6wildcard [::]:123
May 27 17:45:20.736433 ntpd[1974]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 17:45:20.746771 coreos-metadata[1967]: May 27 17:45:20.739 INFO Fetch successful
May 27 17:45:20.746771 coreos-metadata[1967]: May 27 17:45:20.739 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Listen normally on 2 lo 127.0.0.1:123
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Listen normally on 3 eth0 172.31.23.101:123
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Listen normally on 4 lo [::1]:123
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: bind(21) AF_INET6 fe80::438:feff:fed2:d323%2#123 flags 0x11 failed: Cannot assign requested address
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: unable to create socket on eth0 (5) for fe80::438:feff:fed2:d323%2#123
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: failed to init interface for address fe80::438:feff:fed2:d323%2
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: Listening on routing socket on fd #21 for interface updates
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 17:45:20.746959 ntpd[1974]: 27 May 17:45:20 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 17:45:20.743243 ntpd[1974]: Listen normally on 2 lo 127.0.0.1:123
May 27 17:45:20.755258 coreos-metadata[1967]: May 27 17:45:20.749 INFO Fetch successful
May 27 17:45:20.755258 coreos-metadata[1967]: May 27 17:45:20.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 27 17:45:20.755258 coreos-metadata[1967]: May 27 17:45:20.750 INFO Fetch failed with 404: resource not found
May 27 17:45:20.755258 coreos-metadata[1967]: May 27 17:45:20.750 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 27 17:45:20.755258 coreos-metadata[1967]: May 27 17:45:20.752 INFO Fetch successful
May 27 17:45:20.755258 coreos-metadata[1967]: May 27 17:45:20.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 27 17:45:20.743296 ntpd[1974]: Listen normally on 3 eth0 172.31.23.101:123
May 27 17:45:20.743337 ntpd[1974]: Listen normally on 4 lo [::1]:123
May 27 17:45:20.743394 ntpd[1974]: bind(21) AF_INET6 fe80::438:feff:fed2:d323%2#123 flags 0x11 failed: Cannot assign requested address
May 27 17:45:20.743415 ntpd[1974]: unable to create socket on eth0 (5) for fe80::438:feff:fed2:d323%2#123
May 27 17:45:20.743431 ntpd[1974]: failed to init interface for address fe80::438:feff:fed2:d323%2
May 27 17:45:20.743465 ntpd[1974]: Listening on routing socket on fd #21 for interface updates
May 27 17:45:20.745100 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 17:45:20.745133 ntpd[1974]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 17:45:20.759396 coreos-metadata[1967]: May 27 17:45:20.757 INFO Fetch successful
May 27 17:45:20.759396 coreos-metadata[1967]: May 27 17:45:20.757 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 27 17:45:20.762781 coreos-metadata[1967]: May 27 17:45:20.761 INFO Fetch successful
May 27 17:45:20.762781 coreos-metadata[1967]: May 27 17:45:20.761 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 27 17:45:20.762491 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 27 17:45:20.765691 coreos-metadata[1967]: May 27 17:45:20.763 INFO Fetch successful
May 27 17:45:20.765691 coreos-metadata[1967]: May 27 17:45:20.763 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 27 17:45:20.768373 coreos-metadata[1967]: May 27 17:45:20.766 INFO Fetch successful
May 27 17:45:20.769207 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 27 17:45:20.769667 update_engine[1981]: I20250527 17:45:20.769608 1981 update_check_scheduler.cc:74] Next update check in 10m42s
May 27 17:45:20.770123 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:45:20.776912 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:45:20.825219 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 27 17:45:20.858249 extend-filesystems[2021]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 27 17:45:20.858249 extend-filesystems[2021]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 17:45:20.858249 extend-filesystems[2021]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 27 17:45:20.849983 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:45:20.864615 extend-filesystems[1971]: Resized filesystem in /dev/nvme0n1p9
May 27 17:45:20.850278 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:45:20.919439 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 17:45:20.921712 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:45:20.928774 systemd-logind[1979]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 17:45:20.928808 systemd-logind[1979]: Watching system buttons on /dev/input/event3 (Sleep Button)
May 27 17:45:20.928831 systemd-logind[1979]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:45:20.931400 systemd-logind[1979]: New seat seat0.
May 27 17:45:20.943334 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:45:20.984300 bash[2053]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:45:20.983985 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:45:21.003580 systemd[1]: Starting sshkeys.service...
May 27 17:45:21.153847 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 17:45:21.160477 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 17:45:21.286625 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 27 17:45:21.323465 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 27 17:45:21.330763 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 27 17:45:21.348650 systemd[1]: Starting polkit.service - Authorization Manager...
May 27 17:45:21.444377 systemd-networkd[1817]: eth0: Gained IPv6LL
May 27 17:45:21.456023 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:45:21.457475 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:45:21.465441 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 27 17:45:21.473457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:45:21.478054 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:45:21.565226 coreos-metadata[2127]: May 27 17:45:21.564 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 27 17:45:21.580527 coreos-metadata[2127]: May 27 17:45:21.576 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 27 17:45:21.580527 coreos-metadata[2127]: May 27 17:45:21.580 INFO Fetch successful May 27 17:45:21.580527 coreos-metadata[2127]: May 27 17:45:21.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 27 17:45:21.586698 coreos-metadata[2127]: May 27 17:45:21.585 INFO Fetch successful May 27 17:45:21.591680 unknown[2127]: wrote ssh authorized keys file for user: core May 27 17:45:21.618474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:45:21.625785 locksmithd[2028]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 17:45:21.651273 update-ssh-keys[2169]: Updated "/home/core/.ssh/authorized_keys" May 27 17:45:21.652000 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 17:45:21.664524 systemd[1]: Finished sshkeys.service. 
May 27 17:45:21.728861 polkitd[2151]: Started polkitd version 126 May 27 17:45:21.737182 polkitd[2151]: Loading rules from directory /etc/polkit-1/rules.d May 27 17:45:21.739817 polkitd[2151]: Loading rules from directory /run/polkit-1/rules.d May 27 17:45:21.739880 polkitd[2151]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 17:45:21.741376 polkitd[2151]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 27 17:45:21.741433 polkitd[2151]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 17:45:21.741490 polkitd[2151]: Loading rules from directory /usr/share/polkit-1/rules.d May 27 17:45:21.743848 polkitd[2151]: Finished loading, compiling and executing 2 rules May 27 17:45:21.744191 systemd[1]: Started polkit.service - Authorization Manager. May 27 17:45:21.748439 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 27 17:45:21.750943 polkitd[2151]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 27 17:45:21.763525 amazon-ssm-agent[2161]: Initializing new seelog logger May 27 17:45:21.765252 amazon-ssm-agent[2161]: New Seelog Logger Creation Complete May 27 17:45:21.765399 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:21.765399 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:21.766977 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 processing appconfig overrides May 27 17:45:21.769708 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:21.769708 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 27 17:45:21.769840 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 processing appconfig overrides May 27 17:45:21.770074 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:21.770074 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:21.771221 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.7689 INFO Proxy environment variables: May 27 17:45:21.771291 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 processing appconfig overrides May 27 17:45:21.775396 containerd[2001]: time="2025-05-27T17:45:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 17:45:21.777230 containerd[2001]: time="2025-05-27T17:45:21.776082616Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 17:45:21.783092 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:21.783585 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:21.783585 amazon-ssm-agent[2161]: 2025/05/27 17:45:21 processing appconfig overrides May 27 17:45:21.784751 systemd-hostnamed[2027]: Hostname set to (transient) May 27 17:45:21.784899 systemd-resolved[1818]: System hostname changed to 'ip-172-31-23-101'. 
May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829524552Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.666µs" May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829569070Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829594023Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829784411Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829801317Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829831640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829906533Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:45:21.830836 containerd[2001]: time="2025-05-27T17:45:21.829922150Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:45:21.831305 containerd[2001]: time="2025-05-27T17:45:21.831263088Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:45:21.831383 containerd[2001]: time="2025-05-27T17:45:21.831368271Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:45:21.831452 containerd[2001]: time="2025-05-27T17:45:21.831438055Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:45:21.831528 containerd[2001]: time="2025-05-27T17:45:21.831514267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 17:45:21.831712 containerd[2001]: time="2025-05-27T17:45:21.831694821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 17:45:21.832569 containerd[2001]: time="2025-05-27T17:45:21.832539834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:45:21.836387 containerd[2001]: time="2025-05-27T17:45:21.834251649Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:45:21.836387 containerd[2001]: time="2025-05-27T17:45:21.834277761Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 17:45:21.836387 containerd[2001]: time="2025-05-27T17:45:21.834330746Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 17:45:21.836387 containerd[2001]: time="2025-05-27T17:45:21.835266341Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 17:45:21.836387 containerd[2001]: time="2025-05-27T17:45:21.835364767Z" level=info msg="metadata content store policy set" policy=shared May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844464897Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844545334Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844567306Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844584197Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844601275Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844617289Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844637938Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844656107Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844683536Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844698159Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844711579Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844728910Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844883036Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:45:21.845228 containerd[2001]: time="2025-05-27T17:45:21.844905702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.844925560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.844944091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.844960005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.844976577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.844995521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.845008601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.845025304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.845040729Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.845057135Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:45:21.845777 containerd[2001]: 
time="2025-05-27T17:45:21.845143418Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:45:21.845777 containerd[2001]: time="2025-05-27T17:45:21.845162160Z" level=info msg="Start snapshots syncer" May 27 17:45:21.846328 containerd[2001]: time="2025-05-27T17:45:21.845192308Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:45:21.847776 containerd[2001]: time="2025-05-27T17:45:21.847653065Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:45:21.847776 containerd[2001]: time="2025-05-27T17:45:21.847747519Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849232275Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849460885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849494943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849526515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849543194Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849562298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849578706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849608552Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849643774Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849679034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849695221Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849762203Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849784462Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:45:21.850049 containerd[2001]: time="2025-05-27T17:45:21.849797466Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.849870298Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.849899937Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.849914828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.849930910Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.849952307Z" level=info msg="runtime interface created" May 27 17:45:21.850664 containerd[2001]: 
time="2025-05-27T17:45:21.849974167Z" level=info msg="created NRI interface" May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.849987785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.850006092Z" level=info msg="Connect containerd service" May 27 17:45:21.850664 containerd[2001]: time="2025-05-27T17:45:21.850231835Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:45:21.855320 containerd[2001]: time="2025-05-27T17:45:21.854416151Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:45:21.872218 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.7690 INFO http_proxy: May 27 17:45:21.966839 sshd_keygen[2020]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 17:45:21.973477 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.7690 INFO no_proxy: May 27 17:45:22.029746 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 17:45:22.035330 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 17:45:22.062870 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:45:22.066890 systemd[1]: Started sshd@0-172.31.23.101:22-139.178.68.195:52038.service - OpenSSH per-connection server daemon (139.178.68.195:52038). May 27 17:45:22.075887 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.7690 INFO https_proxy: May 27 17:45:22.083848 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:45:22.084913 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 17:45:22.097696 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 27 17:45:22.167331 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:45:22.173708 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:45:22.177233 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.7697 INFO Checking if agent identity type OnPrem can be assumed May 27 17:45:22.179759 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 17:45:22.181146 systemd[1]: Reached target getty.target - Login Prompts. May 27 17:45:22.281074 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.7699 INFO Checking if agent identity type EC2 can be assumed May 27 17:45:22.339074 containerd[2001]: time="2025-05-27T17:45:22.339012191Z" level=info msg="Start subscribing containerd event" May 27 17:45:22.343128 containerd[2001]: time="2025-05-27T17:45:22.339264411Z" level=info msg="Start recovering state" May 27 17:45:22.343128 containerd[2001]: time="2025-05-27T17:45:22.341263288Z" level=info msg="Start event monitor" May 27 17:45:22.343128 containerd[2001]: time="2025-05-27T17:45:22.342495232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:45:22.343128 containerd[2001]: time="2025-05-27T17:45:22.342565265Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:45:22.343426 containerd[2001]: time="2025-05-27T17:45:22.343405732Z" level=info msg="Start cni network conf syncer for default" May 27 17:45:22.343494 containerd[2001]: time="2025-05-27T17:45:22.343480747Z" level=info msg="Start streaming server" May 27 17:45:22.343555 containerd[2001]: time="2025-05-27T17:45:22.343544290Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:45:22.343612 containerd[2001]: time="2025-05-27T17:45:22.343602535Z" level=info msg="runtime interface starting up..." May 27 17:45:22.343660 containerd[2001]: time="2025-05-27T17:45:22.343651622Z" level=info msg="starting plugins..." 
May 27 17:45:22.343730 containerd[2001]: time="2025-05-27T17:45:22.343717721Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:45:22.344051 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:45:22.345274 containerd[2001]: time="2025-05-27T17:45:22.345248149Z" level=info msg="containerd successfully booted in 0.570767s" May 27 17:45:22.380267 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.9931 INFO Agent will take identity from EC2 May 27 17:45:22.391360 sshd[2209]: Accepted publickey for core from 139.178.68.195 port 52038 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:22.400703 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:22.409633 tar[1986]: linux-amd64/README.md May 27 17:45:22.426248 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:45:22.428552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:45:22.431142 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:45:22.452546 systemd-logind[1979]: New session 1 of user core. May 27 17:45:22.467659 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:45:22.473118 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:45:22.479694 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.9998 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 May 27 17:45:22.489630 (systemd)[2230]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:45:22.493705 systemd-logind[1979]: New session c1 of user core. May 27 17:45:22.497857 amazon-ssm-agent[2161]: 2025/05/27 17:45:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:45:22.498043 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 27 17:45:22.498157 amazon-ssm-agent[2161]: 2025/05/27 17:45:22 processing appconfig overrides May 27 17:45:22.528729 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.9998 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 27 17:45:22.528729 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.9998 INFO [amazon-ssm-agent] Starting Core Agent May 27 17:45:22.528729 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.9998 INFO [amazon-ssm-agent] Registrar detected. Attempting registration May 27 17:45:22.528729 amazon-ssm-agent[2161]: 2025-05-27 17:45:21.9998 INFO [Registrar] Starting registrar module May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.0035 INFO [EC2Identity] Checking disk for registration info May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.0035 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.0035 INFO [EC2Identity] Generating registration keypair May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.4471 INFO [EC2Identity] Checking write access before registering May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.4476 INFO [EC2Identity] Registering EC2 instance with Systems Manager May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.4976 INFO [EC2Identity] EC2 registration was successful. May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.4977 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.4977 INFO [CredentialRefresher] credentialRefresher has started May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.4977 INFO [CredentialRefresher] Starting credentials refresher loop May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.5284 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 27 17:45:22.528951 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.5286 INFO [CredentialRefresher] Credentials ready May 27 17:45:22.579484 amazon-ssm-agent[2161]: 2025-05-27 17:45:22.5288 INFO [CredentialRefresher] Next credential rotation will be in 29.999992855316666 minutes May 27 17:45:22.687788 systemd[2230]: Queued start job for default target default.target. May 27 17:45:22.700392 systemd[2230]: Created slice app.slice - User Application Slice. May 27 17:45:22.700428 systemd[2230]: Reached target paths.target - Paths. May 27 17:45:22.700474 systemd[2230]: Reached target timers.target - Timers. May 27 17:45:22.701888 systemd[2230]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:45:22.714685 systemd[2230]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:45:22.714804 systemd[2230]: Reached target sockets.target - Sockets. May 27 17:45:22.714857 systemd[2230]: Reached target basic.target - Basic System. May 27 17:45:22.714893 systemd[2230]: Reached target default.target - Main User Target. May 27 17:45:22.714923 systemd[2230]: Startup finished in 211ms. May 27 17:45:22.714976 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:45:22.725466 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:45:22.870277 systemd[1]: Started sshd@1-172.31.23.101:22-139.178.68.195:52040.service - OpenSSH per-connection server daemon (139.178.68.195:52040). 
May 27 17:45:23.046163 sshd[2241]: Accepted publickey for core from 139.178.68.195 port 52040 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:23.047748 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:23.055843 systemd-logind[1979]: New session 2 of user core. May 27 17:45:23.059775 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:45:23.178948 sshd[2243]: Connection closed by 139.178.68.195 port 52040 May 27 17:45:23.180223 sshd-session[2241]: pam_unix(sshd:session): session closed for user core May 27 17:45:23.183881 systemd[1]: sshd@1-172.31.23.101:22-139.178.68.195:52040.service: Deactivated successfully. May 27 17:45:23.186268 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:45:23.188942 systemd-logind[1979]: Session 2 logged out. Waiting for processes to exit. May 27 17:45:23.190143 systemd-logind[1979]: Removed session 2. May 27 17:45:23.210165 systemd[1]: Started sshd@2-172.31.23.101:22-139.178.68.195:54708.service - OpenSSH per-connection server daemon (139.178.68.195:54708). May 27 17:45:23.386902 sshd[2249]: Accepted publickey for core from 139.178.68.195 port 54708 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:23.389078 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:23.397371 systemd-logind[1979]: New session 3 of user core. May 27 17:45:23.400418 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:45:23.524482 sshd[2251]: Connection closed by 139.178.68.195 port 54708 May 27 17:45:23.525316 sshd-session[2249]: pam_unix(sshd:session): session closed for user core May 27 17:45:23.529162 systemd[1]: sshd@2-172.31.23.101:22-139.178.68.195:54708.service: Deactivated successfully. May 27 17:45:23.531006 systemd[1]: session-3.scope: Deactivated successfully. May 27 17:45:23.532439 systemd-logind[1979]: Session 3 logged out. 
Waiting for processes to exit. May 27 17:45:23.534834 systemd-logind[1979]: Removed session 3. May 27 17:45:23.543882 amazon-ssm-agent[2161]: 2025-05-27 17:45:23.5437 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 27 17:45:23.645783 amazon-ssm-agent[2161]: 2025-05-27 17:45:23.5455 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2258) started May 27 17:45:23.687723 ntpd[1974]: Listen normally on 6 eth0 [fe80::438:feff:fed2:d323%2]:123 May 27 17:45:23.688122 ntpd[1974]: 27 May 17:45:23 ntpd[1974]: Listen normally on 6 eth0 [fe80::438:feff:fed2:d323%2]:123 May 27 17:45:23.746098 amazon-ssm-agent[2161]: 2025-05-27 17:45:23.5456 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 27 17:45:25.442879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:45:25.444783 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:45:25.448299 systemd[1]: Startup finished in 2.880s (kernel) + 7.952s (initrd) + 9.874s (userspace) = 20.707s. May 27 17:45:25.456221 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:45:27.508702 kubelet[2275]: E0527 17:45:27.508618 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:45:27.512627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:45:27.512849 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 27 17:45:27.513500 systemd[1]: kubelet.service: Consumed 1.089s CPU time, 268.5M memory peak. May 27 17:45:28.182463 systemd-resolved[1818]: Clock change detected. Flushing caches. May 27 17:45:34.059632 systemd[1]: Started sshd@3-172.31.23.101:22-139.178.68.195:45106.service - OpenSSH per-connection server daemon (139.178.68.195:45106). May 27 17:45:34.238019 sshd[2287]: Accepted publickey for core from 139.178.68.195 port 45106 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:34.239743 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:34.246238 systemd-logind[1979]: New session 4 of user core. May 27 17:45:34.255714 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:45:34.374478 sshd[2289]: Connection closed by 139.178.68.195 port 45106 May 27 17:45:34.375272 sshd-session[2287]: pam_unix(sshd:session): session closed for user core May 27 17:45:34.379143 systemd[1]: sshd@3-172.31.23.101:22-139.178.68.195:45106.service: Deactivated successfully. May 27 17:45:34.380996 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:45:34.382147 systemd-logind[1979]: Session 4 logged out. Waiting for processes to exit. May 27 17:45:34.383640 systemd-logind[1979]: Removed session 4. May 27 17:45:34.406679 systemd[1]: Started sshd@4-172.31.23.101:22-139.178.68.195:45116.service - OpenSSH per-connection server daemon (139.178.68.195:45116). May 27 17:45:34.580764 sshd[2295]: Accepted publickey for core from 139.178.68.195 port 45116 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:34.582255 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:34.588163 systemd-logind[1979]: New session 5 of user core. May 27 17:45:34.593655 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 17:45:34.711308 sshd[2297]: Connection closed by 139.178.68.195 port 45116 May 27 17:45:34.712062 sshd-session[2295]: pam_unix(sshd:session): session closed for user core May 27 17:45:34.716117 systemd[1]: sshd@4-172.31.23.101:22-139.178.68.195:45116.service: Deactivated successfully. May 27 17:45:34.717877 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:45:34.718769 systemd-logind[1979]: Session 5 logged out. Waiting for processes to exit. May 27 17:45:34.720123 systemd-logind[1979]: Removed session 5. May 27 17:45:34.745513 systemd[1]: Started sshd@5-172.31.23.101:22-139.178.68.195:45130.service - OpenSSH per-connection server daemon (139.178.68.195:45130). May 27 17:45:34.927319 sshd[2303]: Accepted publickey for core from 139.178.68.195 port 45130 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:34.928817 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:34.935498 systemd-logind[1979]: New session 6 of user core. May 27 17:45:34.944726 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 17:45:35.064493 sshd[2305]: Connection closed by 139.178.68.195 port 45130 May 27 17:45:35.065739 sshd-session[2303]: pam_unix(sshd:session): session closed for user core May 27 17:45:35.070307 systemd[1]: sshd@5-172.31.23.101:22-139.178.68.195:45130.service: Deactivated successfully. May 27 17:45:35.072354 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:45:35.073267 systemd-logind[1979]: Session 6 logged out. Waiting for processes to exit. May 27 17:45:35.075187 systemd-logind[1979]: Removed session 6. May 27 17:45:35.098276 systemd[1]: Started sshd@6-172.31.23.101:22-139.178.68.195:45146.service - OpenSSH per-connection server daemon (139.178.68.195:45146). 
May 27 17:45:35.274053 sshd[2311]: Accepted publickey for core from 139.178.68.195 port 45146 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:35.275404 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:35.280845 systemd-logind[1979]: New session 7 of user core. May 27 17:45:35.287686 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:45:35.400850 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:45:35.401127 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:35.418961 sudo[2314]: pam_unix(sudo:session): session closed for user root May 27 17:45:35.441695 sshd[2313]: Connection closed by 139.178.68.195 port 45146 May 27 17:45:35.442414 sshd-session[2311]: pam_unix(sshd:session): session closed for user core May 27 17:45:35.446724 systemd[1]: sshd@6-172.31.23.101:22-139.178.68.195:45146.service: Deactivated successfully. May 27 17:45:35.448526 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:45:35.449588 systemd-logind[1979]: Session 7 logged out. Waiting for processes to exit. May 27 17:45:35.450859 systemd-logind[1979]: Removed session 7. May 27 17:45:35.474714 systemd[1]: Started sshd@7-172.31.23.101:22-139.178.68.195:45158.service - OpenSSH per-connection server daemon (139.178.68.195:45158). May 27 17:45:35.646898 sshd[2320]: Accepted publickey for core from 139.178.68.195 port 45158 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:35.648376 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:35.653539 systemd-logind[1979]: New session 8 of user core. May 27 17:45:35.661696 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 17:45:35.759674 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:45:35.759942 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:35.765016 sudo[2324]: pam_unix(sudo:session): session closed for user root May 27 17:45:35.770887 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:45:35.771280 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:35.781771 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:45:35.833992 augenrules[2346]: No rules May 27 17:45:35.835417 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:45:35.835783 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:45:35.837671 sudo[2323]: pam_unix(sudo:session): session closed for user root May 27 17:45:35.860495 sshd[2322]: Connection closed by 139.178.68.195 port 45158 May 27 17:45:35.861220 sshd-session[2320]: pam_unix(sshd:session): session closed for user core May 27 17:45:35.865829 systemd[1]: sshd@7-172.31.23.101:22-139.178.68.195:45158.service: Deactivated successfully. May 27 17:45:35.867832 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:45:35.868911 systemd-logind[1979]: Session 8 logged out. Waiting for processes to exit. May 27 17:45:35.870685 systemd-logind[1979]: Removed session 8. May 27 17:45:35.894334 systemd[1]: Started sshd@8-172.31.23.101:22-139.178.68.195:45160.service - OpenSSH per-connection server daemon (139.178.68.195:45160). 
May 27 17:45:36.072871 sshd[2355]: Accepted publickey for core from 139.178.68.195 port 45160 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:45:36.074560 sshd-session[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:36.080499 systemd-logind[1979]: New session 9 of user core. May 27 17:45:36.087702 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:45:36.188144 sudo[2358]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:45:36.188541 sudo[2358]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:36.732898 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:45:36.751092 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:45:37.040329 dockerd[2377]: time="2025-05-27T17:45:37.040195074Z" level=info msg="Starting up" May 27 17:45:37.042940 dockerd[2377]: time="2025-05-27T17:45:37.042894869Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:45:37.075698 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3586095157-merged.mount: Deactivated successfully. May 27 17:45:37.120177 dockerd[2377]: time="2025-05-27T17:45:37.119850079Z" level=info msg="Loading containers: start." May 27 17:45:37.135454 kernel: Initializing XFRM netlink socket May 27 17:45:37.368354 (udev-worker)[2399]: Network interface NamePolicy= disabled on kernel command line. May 27 17:45:37.422870 systemd-networkd[1817]: docker0: Link UP May 27 17:45:37.430787 dockerd[2377]: time="2025-05-27T17:45:37.430718616Z" level=info msg="Loading containers: done." May 27 17:45:37.449180 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3210257401-merged.mount: Deactivated successfully. 
May 27 17:45:37.456052 dockerd[2377]: time="2025-05-27T17:45:37.455987075Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:45:37.456254 dockerd[2377]: time="2025-05-27T17:45:37.456090721Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:45:37.456254 dockerd[2377]: time="2025-05-27T17:45:37.456247235Z" level=info msg="Initializing buildkit" May 27 17:45:37.486691 dockerd[2377]: time="2025-05-27T17:45:37.486409782Z" level=info msg="Completed buildkit initialization" May 27 17:45:37.496300 dockerd[2377]: time="2025-05-27T17:45:37.496235092Z" level=info msg="Daemon has completed initialization" May 27 17:45:37.496512 dockerd[2377]: time="2025-05-27T17:45:37.496464127Z" level=info msg="API listen on /run/docker.sock" May 27 17:45:37.496812 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:45:38.257901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 17:45:38.259784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:45:38.570618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:45:38.580006 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:45:38.639761 kubelet[2586]: E0527 17:45:38.639687 2586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:45:38.645971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:45:38.646185 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:45:38.647508 systemd[1]: kubelet.service: Consumed 191ms CPU time, 110.2M memory peak. May 27 17:45:39.337146 containerd[2001]: time="2025-05-27T17:45:39.337106037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 17:45:39.910183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806538770.mount: Deactivated successfully. 
May 27 17:45:41.257789 containerd[2001]: time="2025-05-27T17:45:41.257736977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:41.259076 containerd[2001]: time="2025-05-27T17:45:41.259024194Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 27 17:45:41.260362 containerd[2001]: time="2025-05-27T17:45:41.260307093Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:41.262718 containerd[2001]: time="2025-05-27T17:45:41.262660073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:41.264196 containerd[2001]: time="2025-05-27T17:45:41.263420586Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.926274226s" May 27 17:45:41.264196 containerd[2001]: time="2025-05-27T17:45:41.263475754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 17:45:41.264508 containerd[2001]: time="2025-05-27T17:45:41.264479699Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 17:45:42.797467 containerd[2001]: time="2025-05-27T17:45:42.797394250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:42.798604 containerd[2001]: time="2025-05-27T17:45:42.798550466Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 27 17:45:42.799981 containerd[2001]: time="2025-05-27T17:45:42.799912311Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:42.803458 containerd[2001]: time="2025-05-27T17:45:42.803207935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:42.804071 containerd[2001]: time="2025-05-27T17:45:42.803922308Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.539412585s" May 27 17:45:42.804071 containerd[2001]: time="2025-05-27T17:45:42.803956132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 17:45:42.804536 containerd[2001]: time="2025-05-27T17:45:42.804506366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 17:45:44.215793 containerd[2001]: time="2025-05-27T17:45:44.215720333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:44.216995 containerd[2001]: time="2025-05-27T17:45:44.216739005Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 27 17:45:44.218048 containerd[2001]: time="2025-05-27T17:45:44.218002013Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:44.220964 containerd[2001]: time="2025-05-27T17:45:44.220922830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:44.222446 containerd[2001]: time="2025-05-27T17:45:44.221953928Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.417413774s" May 27 17:45:44.222446 containerd[2001]: time="2025-05-27T17:45:44.221996436Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 17:45:44.222735 containerd[2001]: time="2025-05-27T17:45:44.222700467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 17:45:45.270157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365191383.mount: Deactivated successfully. 
May 27 17:45:45.886553 containerd[2001]: time="2025-05-27T17:45:45.886496103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:45.888666 containerd[2001]: time="2025-05-27T17:45:45.888610841Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 17:45:45.891129 containerd[2001]: time="2025-05-27T17:45:45.891065088Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:45.894331 containerd[2001]: time="2025-05-27T17:45:45.894248954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:45.894910 containerd[2001]: time="2025-05-27T17:45:45.894755502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.672022594s" May 27 17:45:45.894910 containerd[2001]: time="2025-05-27T17:45:45.894789173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 17:45:45.895156 containerd[2001]: time="2025-05-27T17:45:45.895135591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 17:45:46.472246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020456060.mount: Deactivated successfully. 
May 27 17:45:47.548515 containerd[2001]: time="2025-05-27T17:45:47.548444891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:47.549678 containerd[2001]: time="2025-05-27T17:45:47.549583051Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 27 17:45:47.550892 containerd[2001]: time="2025-05-27T17:45:47.550863578Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:47.554583 containerd[2001]: time="2025-05-27T17:45:47.553655160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:47.554583 containerd[2001]: time="2025-05-27T17:45:47.554461613Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.659297299s" May 27 17:45:47.554583 containerd[2001]: time="2025-05-27T17:45:47.554492165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 17:45:47.555417 containerd[2001]: time="2025-05-27T17:45:47.555388530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:45:48.014478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747079385.mount: Deactivated successfully. 
May 27 17:45:48.022314 containerd[2001]: time="2025-05-27T17:45:48.022261361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:45:48.023454 containerd[2001]: time="2025-05-27T17:45:48.023240396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 17:45:48.024697 containerd[2001]: time="2025-05-27T17:45:48.024660249Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:45:48.027480 containerd[2001]: time="2025-05-27T17:45:48.027020436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:45:48.028224 containerd[2001]: time="2025-05-27T17:45:48.027607940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.192868ms" May 27 17:45:48.028224 containerd[2001]: time="2025-05-27T17:45:48.027643032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 17:45:48.028515 containerd[2001]: time="2025-05-27T17:45:48.028486329Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 17:45:48.789787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 27 17:45:48.792618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:45:49.146312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:45:49.156928 (kubelet)[2734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:45:49.246263 kubelet[2734]: E0527 17:45:49.246208 2734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:45:49.249815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:45:49.250031 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:45:49.250827 systemd[1]: kubelet.service: Consumed 196ms CPU time, 109.4M memory peak. 
May 27 17:45:51.104220 containerd[2001]: time="2025-05-27T17:45:51.104131683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:51.112759 containerd[2001]: time="2025-05-27T17:45:51.112462907Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 27 17:45:51.117362 containerd[2001]: time="2025-05-27T17:45:51.117310330Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:51.125782 containerd[2001]: time="2025-05-27T17:45:51.125732157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:51.127563 containerd[2001]: time="2025-05-27T17:45:51.126949530Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.098435349s" May 27 17:45:51.127563 containerd[2001]: time="2025-05-27T17:45:51.126988260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 17:45:52.294415 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 27 17:45:56.741089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:45:56.741369 systemd[1]: kubelet.service: Consumed 196ms CPU time, 109.4M memory peak. May 27 17:45:56.744259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 27 17:45:56.779643 systemd[1]: Reload requested from client PID 2779 ('systemctl') (unit session-9.scope)... May 27 17:45:56.779665 systemd[1]: Reloading... May 27 17:45:56.956470 zram_generator::config[2826]: No configuration found. May 27 17:45:57.086758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:45:57.223087 systemd[1]: Reloading finished in 442 ms. May 27 17:45:57.285308 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 17:45:57.285399 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 17:45:57.285943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:45:57.286100 systemd[1]: kubelet.service: Consumed 143ms CPU time, 98.1M memory peak. May 27 17:45:57.288256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:45:57.670403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:45:57.681006 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:45:57.747455 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:45:57.747455 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:45:57.747455 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:45:57.747455 kubelet[2886]: I0527 17:45:57.746592 2886 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:45:58.230417 kubelet[2886]: I0527 17:45:58.230367 2886 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:45:58.230417 kubelet[2886]: I0527 17:45:58.230403 2886 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:45:58.230778 kubelet[2886]: I0527 17:45:58.230755 2886 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:45:58.304214 kubelet[2886]: I0527 17:45:58.304084 2886 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:45:58.307715 kubelet[2886]: E0527 17:45:58.307305 2886 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 17:45:58.334928 kubelet[2886]: I0527 17:45:58.334887 2886 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:45:58.348713 kubelet[2886]: I0527 17:45:58.348483 2886 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:45:58.352837 kubelet[2886]: I0527 17:45:58.352764 2886 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:45:58.357455 kubelet[2886]: I0527 17:45:58.352833 2886 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-101","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:45:58.360851 kubelet[2886]: I0527 17:45:58.360798 2886 topology_manager.go:138] "Creating topology manager with none policy" May 27 
17:45:58.360851 kubelet[2886]: I0527 17:45:58.360842 2886 container_manager_linux.go:303] "Creating device plugin manager"
May 27 17:45:58.361101 kubelet[2886]: I0527 17:45:58.360989 2886 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:45:58.367058 kubelet[2886]: I0527 17:45:58.366731 2886 kubelet.go:480] "Attempting to sync node with API server"
May 27 17:45:58.367058 kubelet[2886]: I0527 17:45:58.366785 2886 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:45:58.367058 kubelet[2886]: I0527 17:45:58.366821 2886 kubelet.go:386] "Adding apiserver pod source"
May 27 17:45:58.371030 kubelet[2886]: E0527 17:45:58.370986 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-101&limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 17:45:58.371412 kubelet[2886]: I0527 17:45:58.371207 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:45:58.381869 kubelet[2886]: E0527 17:45:58.380954 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 17:45:58.381869 kubelet[2886]: I0527 17:45:58.381308 2886 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:45:58.381869 kubelet[2886]: I0527 17:45:58.381801 2886 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 17:45:58.383415 kubelet[2886]: W0527 17:45:58.383375 2886 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:45:58.387005 kubelet[2886]: I0527 17:45:58.386947 2886 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 17:45:58.387005 kubelet[2886]: I0527 17:45:58.387012 2886 server.go:1289] "Started kubelet"
May 27 17:45:58.390719 kubelet[2886]: I0527 17:45:58.389738 2886 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:45:58.393243 kubelet[2886]: I0527 17:45:58.393133 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:45:58.393636 kubelet[2886]: I0527 17:45:58.393610 2886 server.go:317] "Adding debug handlers to kubelet server"
May 27 17:45:58.393930 kubelet[2886]: I0527 17:45:58.393914 2886 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:45:58.405411 kubelet[2886]: I0527 17:45:58.405388 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:45:58.407907 kubelet[2886]: E0527 17:45:58.403855 2886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.101:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.101:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-101.184373651854a808 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-101,UID:ip-172-31-23-101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-101,},FirstTimestamp:2025-05-27 17:45:58.386976776 +0000 UTC m=+0.700654080,LastTimestamp:2025-05-27 17:45:58.386976776 +0000 UTC m=+0.700654080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-101,}"
May 27 17:45:58.408478 kubelet[2886]: I0527 17:45:58.408460 2886 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:45:58.410595 kubelet[2886]: E0527 17:45:58.410576 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:45:58.410703 kubelet[2886]: I0527 17:45:58.410696 2886 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 17:45:58.410984 kubelet[2886]: I0527 17:45:58.410973 2886 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 17:45:58.411085 kubelet[2886]: I0527 17:45:58.411078 2886 reconciler.go:26] "Reconciler: start to sync state"
May 27 17:45:58.411505 kubelet[2886]: E0527 17:45:58.411486 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 17:45:58.416479 kubelet[2886]: E0527 17:45:58.416446 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-101?timeout=10s\": dial tcp 172.31.23.101:6443: connect: connection refused" interval="200ms"
May 27 17:45:58.426459 kubelet[2886]: I0527 17:45:58.425594 2886 factory.go:223] Registration of the systemd container factory successfully
May 27 17:45:58.428503 kubelet[2886]: I0527 17:45:58.425711 2886 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 17:45:58.430854 kubelet[2886]: I0527 17:45:58.430823 2886 factory.go:223] Registration of the containerd container factory successfully
May 27 17:45:58.442857 kubelet[2886]: I0527 17:45:58.442662 2886 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 17:45:58.446520 kubelet[2886]: I0527 17:45:58.446478 2886 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 17:45:58.446520 kubelet[2886]: I0527 17:45:58.446508 2886 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 17:45:58.446679 kubelet[2886]: I0527 17:45:58.446537 2886 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 17:45:58.446679 kubelet[2886]: I0527 17:45:58.446545 2886 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 17:45:58.446679 kubelet[2886]: E0527 17:45:58.446597 2886 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 17:45:58.454291 kubelet[2886]: E0527 17:45:58.454158 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 17:45:58.458709 kubelet[2886]: E0527 17:45:58.458676 2886 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 17:45:58.468166 kubelet[2886]: I0527 17:45:58.468130 2886 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 17:45:58.468310 kubelet[2886]: I0527 17:45:58.468149 2886 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 17:45:58.468310 kubelet[2886]: I0527 17:45:58.468272 2886 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:45:58.472405 kubelet[2886]: I0527 17:45:58.472366 2886 policy_none.go:49] "None policy: Start"
May 27 17:45:58.472405 kubelet[2886]: I0527 17:45:58.472390 2886 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 17:45:58.472405 kubelet[2886]: I0527 17:45:58.472402 2886 state_mem.go:35] "Initializing new in-memory state store"
May 27 17:45:58.480001 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 17:45:58.495341 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 17:45:58.501237 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 17:45:58.512674 kubelet[2886]: E0527 17:45:58.512641 2886 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 17:45:58.512913 kubelet[2886]: I0527 17:45:58.512875 2886 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 17:45:58.512913 kubelet[2886]: I0527 17:45:58.512892 2886 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 17:45:58.518003 kubelet[2886]: I0527 17:45:58.516998 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 17:45:58.518003 kubelet[2886]: E0527 17:45:58.517625 2886 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 17:45:58.518003 kubelet[2886]: E0527 17:45:58.517667 2886 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-101\" not found"
May 27 17:45:58.560320 systemd[1]: Created slice kubepods-burstable-pod3c330fc28422bae0efc20464502555c3.slice - libcontainer container kubepods-burstable-pod3c330fc28422bae0efc20464502555c3.slice.
May 27 17:45:58.575808 kubelet[2886]: E0527 17:45:58.575546 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:45:58.580441 systemd[1]: Created slice kubepods-burstable-podaea2301cee29518de662977cde6c47de.slice - libcontainer container kubepods-burstable-podaea2301cee29518de662977cde6c47de.slice.
May 27 17:45:58.583318 kubelet[2886]: E0527 17:45:58.583284 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:45:58.597697 systemd[1]: Created slice kubepods-burstable-pod96344e2a91cea76acc74046951fca392.slice - libcontainer container kubepods-burstable-pod96344e2a91cea76acc74046951fca392.slice.
May 27 17:45:58.600116 kubelet[2886]: E0527 17:45:58.600076 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:45:58.614653 kubelet[2886]: I0527 17:45:58.614484 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101"
May 27 17:45:58.614653 kubelet[2886]: I0527 17:45:58.614531 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c330fc28422bae0efc20464502555c3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-101\" (UID: \"3c330fc28422bae0efc20464502555c3\") " pod="kube-system/kube-apiserver-ip-172-31-23-101"
May 27 17:45:58.614653 kubelet[2886]: I0527 17:45:58.614563 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101"
May 27 17:45:58.614653 kubelet[2886]: I0527 17:45:58.614585 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101"
May 27 17:45:58.614653 kubelet[2886]: I0527 17:45:58.614606 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96344e2a91cea76acc74046951fca392-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-101\" (UID: \"96344e2a91cea76acc74046951fca392\") " pod="kube-system/kube-scheduler-ip-172-31-23-101"
May 27 17:45:58.614905 kubelet[2886]: I0527 17:45:58.614626 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c330fc28422bae0efc20464502555c3-ca-certs\") pod \"kube-apiserver-ip-172-31-23-101\" (UID: \"3c330fc28422bae0efc20464502555c3\") " pod="kube-system/kube-apiserver-ip-172-31-23-101"
May 27 17:45:58.615010 kubelet[2886]: I0527 17:45:58.614986 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c330fc28422bae0efc20464502555c3-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-101\" (UID: \"3c330fc28422bae0efc20464502555c3\") " pod="kube-system/kube-apiserver-ip-172-31-23-101"
May 27 17:45:58.615128 kubelet[2886]: I0527 17:45:58.615028 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101"
May 27 17:45:58.615128 kubelet[2886]: I0527 17:45:58.615054 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101"
May 27 17:45:58.615592 kubelet[2886]: I0527 17:45:58.615568 2886 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-101"
May 27 17:45:58.615940 kubelet[2886]: E0527 17:45:58.615911 2886 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.101:6443/api/v1/nodes\": dial tcp 172.31.23.101:6443: connect: connection refused" node="ip-172-31-23-101"
May 27 17:45:58.617594 kubelet[2886]: E0527 17:45:58.617555 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-101?timeout=10s\": dial tcp 172.31.23.101:6443: connect: connection refused" interval="400ms"
May 27 17:45:58.818069 kubelet[2886]: I0527 17:45:58.818032 2886 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-101"
May 27 17:45:58.818781 kubelet[2886]: E0527 17:45:58.818453 2886 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.101:6443/api/v1/nodes\": dial tcp 172.31.23.101:6443: connect: connection refused" node="ip-172-31-23-101"
May 27 17:45:58.877802 containerd[2001]: time="2025-05-27T17:45:58.877754222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-101,Uid:3c330fc28422bae0efc20464502555c3,Namespace:kube-system,Attempt:0,}"
May 27 17:45:58.885676 containerd[2001]: time="2025-05-27T17:45:58.885408150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-101,Uid:aea2301cee29518de662977cde6c47de,Namespace:kube-system,Attempt:0,}"
May 27 17:45:58.903287 containerd[2001]: time="2025-05-27T17:45:58.903238913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-101,Uid:96344e2a91cea76acc74046951fca392,Namespace:kube-system,Attempt:0,}"
May 27 17:45:59.019153 kubelet[2886]: E0527 17:45:59.018870 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-101?timeout=10s\": dial tcp 172.31.23.101:6443: connect: connection refused" interval="800ms"
May 27 17:45:59.114659 containerd[2001]: time="2025-05-27T17:45:59.113688361Z" level=info msg="connecting to shim 55476997cc3824c8c73ef0eb559286ce8b5316d2f21897d239499e68125fc606" address="unix:///run/containerd/s/6331948cb78ce3d734d6c98f932cd20849fdca9903b2f077a24121eda3a9c33e" namespace=k8s.io protocol=ttrpc version=3
May 27 17:45:59.119585 containerd[2001]: time="2025-05-27T17:45:59.119531581Z" level=info msg="connecting to shim a29abf484555c3a1df960628616b93846dc9bf6d72ad11c96f6802a89aa43500" address="unix:///run/containerd/s/928d29a6d1a4fe7f5aae8e8d84cfb3744eebbe8874ab5ee77a42eb3d87391b39" namespace=k8s.io protocol=ttrpc version=3
May 27 17:45:59.127289 containerd[2001]: time="2025-05-27T17:45:59.127235375Z" level=info msg="connecting to shim 51216b1712fbaa55be174849ca187659c362aa16a3a7ca8ecc50adfb9016418c" address="unix:///run/containerd/s/91dcfb5f244be530377561206c2d7e0bf2b93dd86cbbf05b4bd15072140a0359" namespace=k8s.io protocol=ttrpc version=3
May 27 17:45:59.216823 kubelet[2886]: E0527 17:45:59.216501 2886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.101:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.101:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-101.184373651854a808 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-101,UID:ip-172-31-23-101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-101,},FirstTimestamp:2025-05-27 17:45:58.386976776 +0000 UTC m=+0.700654080,LastTimestamp:2025-05-27 17:45:58.386976776 +0000 UTC m=+0.700654080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-101,}"
May 27 17:45:59.225475 kubelet[2886]: I0527 17:45:59.225436 2886 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-101"
May 27 17:45:59.225865 kubelet[2886]: E0527 17:45:59.225826 2886 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.101:6443/api/v1/nodes\": dial tcp 172.31.23.101:6443: connect: connection refused" node="ip-172-31-23-101"
May 27 17:45:59.286764 systemd[1]: Started cri-containerd-51216b1712fbaa55be174849ca187659c362aa16a3a7ca8ecc50adfb9016418c.scope - libcontainer container 51216b1712fbaa55be174849ca187659c362aa16a3a7ca8ecc50adfb9016418c.
May 27 17:45:59.289385 systemd[1]: Started cri-containerd-55476997cc3824c8c73ef0eb559286ce8b5316d2f21897d239499e68125fc606.scope - libcontainer container 55476997cc3824c8c73ef0eb559286ce8b5316d2f21897d239499e68125fc606.
May 27 17:45:59.291461 systemd[1]: Started cri-containerd-a29abf484555c3a1df960628616b93846dc9bf6d72ad11c96f6802a89aa43500.scope - libcontainer container a29abf484555c3a1df960628616b93846dc9bf6d72ad11c96f6802a89aa43500.
May 27 17:45:59.303406 kubelet[2886]: E0527 17:45:59.302322 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 17:45:59.321036 kubelet[2886]: E0527 17:45:59.320981 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 17:45:59.382720 kubelet[2886]: E0527 17:45:59.381593 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 17:45:59.411620 containerd[2001]: time="2025-05-27T17:45:59.411562314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-101,Uid:3c330fc28422bae0efc20464502555c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a29abf484555c3a1df960628616b93846dc9bf6d72ad11c96f6802a89aa43500\""
May 27 17:45:59.412705 containerd[2001]: time="2025-05-27T17:45:59.412590914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-101,Uid:aea2301cee29518de662977cde6c47de,Namespace:kube-system,Attempt:0,} returns sandbox id \"51216b1712fbaa55be174849ca187659c362aa16a3a7ca8ecc50adfb9016418c\""
May 27 17:45:59.421682 containerd[2001]: time="2025-05-27T17:45:59.421564809Z" level=info msg="CreateContainer within sandbox \"51216b1712fbaa55be174849ca187659c362aa16a3a7ca8ecc50adfb9016418c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 17:45:59.423710 containerd[2001]: time="2025-05-27T17:45:59.423622037Z" level=info msg="CreateContainer within sandbox \"a29abf484555c3a1df960628616b93846dc9bf6d72ad11c96f6802a89aa43500\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 17:45:59.443255 containerd[2001]: time="2025-05-27T17:45:59.443199207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-101,Uid:96344e2a91cea76acc74046951fca392,Namespace:kube-system,Attempt:0,} returns sandbox id \"55476997cc3824c8c73ef0eb559286ce8b5316d2f21897d239499e68125fc606\""
May 27 17:45:59.453581 containerd[2001]: time="2025-05-27T17:45:59.453534943Z" level=info msg="CreateContainer within sandbox \"55476997cc3824c8c73ef0eb559286ce8b5316d2f21897d239499e68125fc606\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 17:45:59.464317 containerd[2001]: time="2025-05-27T17:45:59.464253949Z" level=info msg="Container 62ebf45529ef585c7df5718ac9343d725f39c34cd83d3dc681aac2f11f19c14d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:45:59.467927 containerd[2001]: time="2025-05-27T17:45:59.467888598Z" level=info msg="Container e30d97ff6ec3da7177b4639da545cf2875e6977751cd847d8559c2cceebc71bd: CDI devices from CRI Config.CDIDevices: []"
May 27 17:45:59.475510 containerd[2001]: time="2025-05-27T17:45:59.475075761Z" level=info msg="Container ea2fc64ae337263f9fa66f40deaf12542b0a8ec19421e36a2d20427b31f43fa6: CDI devices from CRI Config.CDIDevices: []"
May 27 17:45:59.483033 containerd[2001]: time="2025-05-27T17:45:59.482963390Z" level=info msg="CreateContainer within sandbox \"51216b1712fbaa55be174849ca187659c362aa16a3a7ca8ecc50adfb9016418c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"62ebf45529ef585c7df5718ac9343d725f39c34cd83d3dc681aac2f11f19c14d\""
May 27 17:45:59.484012 containerd[2001]: time="2025-05-27T17:45:59.483959820Z" level=info msg="StartContainer for \"62ebf45529ef585c7df5718ac9343d725f39c34cd83d3dc681aac2f11f19c14d\""
May 27 17:45:59.486359 containerd[2001]: time="2025-05-27T17:45:59.486321172Z" level=info msg="connecting to shim 62ebf45529ef585c7df5718ac9343d725f39c34cd83d3dc681aac2f11f19c14d" address="unix:///run/containerd/s/91dcfb5f244be530377561206c2d7e0bf2b93dd86cbbf05b4bd15072140a0359" protocol=ttrpc version=3
May 27 17:45:59.487844 containerd[2001]: time="2025-05-27T17:45:59.487797519Z" level=info msg="CreateContainer within sandbox \"55476997cc3824c8c73ef0eb559286ce8b5316d2f21897d239499e68125fc606\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea2fc64ae337263f9fa66f40deaf12542b0a8ec19421e36a2d20427b31f43fa6\""
May 27 17:45:59.488711 containerd[2001]: time="2025-05-27T17:45:59.488648190Z" level=info msg="StartContainer for \"ea2fc64ae337263f9fa66f40deaf12542b0a8ec19421e36a2d20427b31f43fa6\""
May 27 17:45:59.492193 containerd[2001]: time="2025-05-27T17:45:59.492155086Z" level=info msg="connecting to shim ea2fc64ae337263f9fa66f40deaf12542b0a8ec19421e36a2d20427b31f43fa6" address="unix:///run/containerd/s/6331948cb78ce3d734d6c98f932cd20849fdca9903b2f077a24121eda3a9c33e" protocol=ttrpc version=3
May 27 17:45:59.494089 containerd[2001]: time="2025-05-27T17:45:59.494045560Z" level=info msg="CreateContainer within sandbox \"a29abf484555c3a1df960628616b93846dc9bf6d72ad11c96f6802a89aa43500\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e30d97ff6ec3da7177b4639da545cf2875e6977751cd847d8559c2cceebc71bd\""
May 27 17:45:59.495881 containerd[2001]: time="2025-05-27T17:45:59.495820906Z" level=info msg="StartContainer for \"e30d97ff6ec3da7177b4639da545cf2875e6977751cd847d8559c2cceebc71bd\""
May 27 17:45:59.500195 containerd[2001]: time="2025-05-27T17:45:59.499571520Z" level=info msg="connecting to shim e30d97ff6ec3da7177b4639da545cf2875e6977751cd847d8559c2cceebc71bd" address="unix:///run/containerd/s/928d29a6d1a4fe7f5aae8e8d84cfb3744eebbe8874ab5ee77a42eb3d87391b39" protocol=ttrpc version=3
May 27 17:45:59.526657 systemd[1]: Started cri-containerd-ea2fc64ae337263f9fa66f40deaf12542b0a8ec19421e36a2d20427b31f43fa6.scope - libcontainer container ea2fc64ae337263f9fa66f40deaf12542b0a8ec19421e36a2d20427b31f43fa6.
May 27 17:45:59.538487 systemd[1]: Started cri-containerd-62ebf45529ef585c7df5718ac9343d725f39c34cd83d3dc681aac2f11f19c14d.scope - libcontainer container 62ebf45529ef585c7df5718ac9343d725f39c34cd83d3dc681aac2f11f19c14d.
May 27 17:45:59.549703 systemd[1]: Started cri-containerd-e30d97ff6ec3da7177b4639da545cf2875e6977751cd847d8559c2cceebc71bd.scope - libcontainer container e30d97ff6ec3da7177b4639da545cf2875e6977751cd847d8559c2cceebc71bd.
May 27 17:45:59.653136 containerd[2001]: time="2025-05-27T17:45:59.651843640Z" level=info msg="StartContainer for \"62ebf45529ef585c7df5718ac9343d725f39c34cd83d3dc681aac2f11f19c14d\" returns successfully"
May 27 17:45:59.668450 containerd[2001]: time="2025-05-27T17:45:59.667507035Z" level=info msg="StartContainer for \"ea2fc64ae337263f9fa66f40deaf12542b0a8ec19421e36a2d20427b31f43fa6\" returns successfully"
May 27 17:45:59.680199 containerd[2001]: time="2025-05-27T17:45:59.680131254Z" level=info msg="StartContainer for \"e30d97ff6ec3da7177b4639da545cf2875e6977751cd847d8559c2cceebc71bd\" returns successfully"
May 27 17:45:59.734468 kubelet[2886]: E0527 17:45:59.733379 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-101&limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 17:45:59.821675 kubelet[2886]: E0527 17:45:59.821625 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-101?timeout=10s\": dial tcp 172.31.23.101:6443: connect: connection refused" interval="1.6s"
May 27 17:46:00.030760 kubelet[2886]: I0527 17:46:00.030655 2886 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-101"
May 27 17:46:00.031213 kubelet[2886]: E0527 17:46:00.031026 2886 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.101:6443/api/v1/nodes\": dial tcp 172.31.23.101:6443: connect: connection refused" node="ip-172-31-23-101"
May 27 17:46:00.447368 kubelet[2886]: E0527 17:46:00.447324 2886 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 17:46:00.488807 kubelet[2886]: E0527 17:46:00.488775 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:00.497347 kubelet[2886]: E0527 17:46:00.497312 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:00.497518 kubelet[2886]: E0527 17:46:00.497377 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:01.422284 kubelet[2886]: E0527 17:46:01.422236 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-101?timeout=10s\": dial tcp 172.31.23.101:6443: connect: connection refused" interval="3.2s"
May 27 17:46:01.492397 kubelet[2886]: E0527 17:46:01.492338 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 17:46:01.500755 kubelet[2886]: E0527 17:46:01.500719 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:01.502545 kubelet[2886]: E0527 17:46:01.502510 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:01.502973 kubelet[2886]: E0527 17:46:01.502950 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:01.634011 kubelet[2886]: I0527 17:46:01.633979 2886 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-101"
May 27 17:46:01.634367 kubelet[2886]: E0527 17:46:01.634338 2886 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.101:6443/api/v1/nodes\": dial tcp 172.31.23.101:6443: connect: connection refused" node="ip-172-31-23-101"
May 27 17:46:01.710080 kubelet[2886]: E0527 17:46:01.709535 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-101&limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 17:46:01.990188 kubelet[2886]: E0527 17:46:01.989722 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 17:46:02.236921 kubelet[2886]: E0527 17:46:02.236861 2886 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 17:46:02.504191 kubelet[2886]: E0527 17:46:02.504145 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:02.505028 kubelet[2886]: E0527 17:46:02.504694 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:03.894863 kubelet[2886]: E0527 17:46:03.894828 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:04.473555 kubelet[2886]: E0527 17:46:04.473520 2886 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-101" not found
May 27 17:46:04.628244 kubelet[2886]: E0527 17:46:04.628209 2886 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-101\" not found" node="ip-172-31-23-101"
May 27 17:46:04.836890 kubelet[2886]: I0527 17:46:04.836839 2886 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-101"
May 27 17:46:04.841259 kubelet[2886]: E0527 17:46:04.841192 2886 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-101" not found
May 27 17:46:04.854769 kubelet[2886]: I0527 17:46:04.854720 2886 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-101"
May 27 17:46:04.854769 kubelet[2886]: E0527 17:46:04.854765 2886 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-101\": node \"ip-172-31-23-101\" not found"
May 27 17:46:04.867515 kubelet[2886]: E0527 17:46:04.867461 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:04.967749 kubelet[2886]: E0527 17:46:04.967705 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.069207 kubelet[2886]: E0527 17:46:05.069155 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.170144 kubelet[2886]: E0527 17:46:05.170031 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.270538 kubelet[2886]: E0527 17:46:05.270499 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.371191 kubelet[2886]: E0527 17:46:05.371144 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.471894 kubelet[2886]: E0527 17:46:05.471789 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.572075 kubelet[2886]: E0527 17:46:05.572023 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.673482 kubelet[2886]: E0527 17:46:05.673357 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.773984 kubelet[2886]: E0527 17:46:05.773835 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.874024 kubelet[2886]: E0527 17:46:05.873966 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:05.974353 kubelet[2886]: E0527 17:46:05.974296 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:06.075288 kubelet[2886]: E0527 17:46:06.075238 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:06.175838 kubelet[2886]: E0527 17:46:06.175800 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:06.177609 systemd[1]: Reload requested from client PID 3170 ('systemctl') (unit session-9.scope)...
May 27 17:46:06.177627 systemd[1]: Reloading...
May 27 17:46:06.242133 update_engine[1981]: I20250527 17:46:06.241479 1981 update_attempter.cc:509] Updating boot flags...
May 27 17:46:06.276162 kubelet[2886]: E0527 17:46:06.276120 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found"
May 27 17:46:06.307499 zram_generator::config[3220]: No configuration found.
May 27 17:46:06.346975 kubelet[2886]: E0527 17:46:06.346848 2886 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-101\" not found" node="ip-172-31-23-101" May 27 17:46:06.377446 kubelet[2886]: E0527 17:46:06.377396 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found" May 27 17:46:06.480589 kubelet[2886]: E0527 17:46:06.480550 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found" May 27 17:46:06.582628 kubelet[2886]: E0527 17:46:06.582517 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found" May 27 17:46:06.635279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:46:06.684472 kubelet[2886]: E0527 17:46:06.684408 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found" May 27 17:46:06.785308 kubelet[2886]: E0527 17:46:06.785230 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found" May 27 17:46:06.877821 systemd[1]: Reloading finished in 699 ms. May 27 17:46:06.886465 kubelet[2886]: E0527 17:46:06.886312 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-101\" not found" May 27 17:46:06.986112 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:07.010596 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:46:07.011140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:07.011418 systemd[1]: kubelet.service: Consumed 1.049s CPU time, 127.2M memory peak. 
May 27 17:46:07.015874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:07.294076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:07.306048 (kubelet)[3458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:46:07.380114 kubelet[3458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:46:07.380114 kubelet[3458]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:46:07.380114 kubelet[3458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:46:07.380741 kubelet[3458]: I0527 17:46:07.380223 3458 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:46:07.397470 kubelet[3458]: I0527 17:46:07.396874 3458 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:46:07.397470 kubelet[3458]: I0527 17:46:07.396918 3458 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:46:07.397470 kubelet[3458]: I0527 17:46:07.397326 3458 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:46:07.401654 kubelet[3458]: I0527 17:46:07.401617 3458 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 17:46:07.406060 kubelet[3458]: I0527 17:46:07.406020 3458 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:46:07.413722 kubelet[3458]: I0527 17:46:07.413664 3458 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:46:07.421694 kubelet[3458]: I0527 17:46:07.421667 3458 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:46:07.422817 kubelet[3458]: I0527 17:46:07.422779 3458 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:46:07.425239 kubelet[3458]: I0527 17:46:07.424949 3458 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-101","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:46:07.426166 kubelet[3458]: I0527 17:46:07.425812 3458 topology_manager.go:138] "Creating topology manager with none policy" May 27 
17:46:07.426166 kubelet[3458]: I0527 17:46:07.425836 3458 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:46:07.426166 kubelet[3458]: I0527 17:46:07.425909 3458 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:07.426166 kubelet[3458]: I0527 17:46:07.426102 3458 kubelet.go:480] "Attempting to sync node with API server" May 27 17:46:07.426166 kubelet[3458]: I0527 17:46:07.426116 3458 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:46:07.426166 kubelet[3458]: I0527 17:46:07.426146 3458 kubelet.go:386] "Adding apiserver pod source" May 27 17:46:07.426518 kubelet[3458]: I0527 17:46:07.426502 3458 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:46:07.431006 kubelet[3458]: I0527 17:46:07.430971 3458 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:46:07.432314 kubelet[3458]: I0527 17:46:07.432286 3458 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:46:07.442341 kubelet[3458]: I0527 17:46:07.442237 3458 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:46:07.442341 kubelet[3458]: I0527 17:46:07.442299 3458 server.go:1289] "Started kubelet" May 27 17:46:07.450465 kubelet[3458]: I0527 17:46:07.449150 3458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:46:07.459204 kubelet[3458]: I0527 17:46:07.458530 3458 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:46:07.464284 kubelet[3458]: I0527 17:46:07.464204 3458 server.go:317] "Adding debug handlers to kubelet server" May 27 17:46:07.464424 kubelet[3458]: I0527 17:46:07.464385 3458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 
17:46:07.466032 kubelet[3458]: I0527 17:46:07.459018 3458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:46:07.466946 kubelet[3458]: I0527 17:46:07.466918 3458 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:46:07.468832 kubelet[3458]: I0527 17:46:07.468734 3458 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:46:07.469100 kubelet[3458]: I0527 17:46:07.469034 3458 reconciler.go:26] "Reconciler: start to sync state" May 27 17:46:07.471537 kubelet[3458]: I0527 17:46:07.471514 3458 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:46:07.473578 kubelet[3458]: I0527 17:46:07.473107 3458 factory.go:223] Registration of the systemd container factory successfully May 27 17:46:07.476641 kubelet[3458]: I0527 17:46:07.476011 3458 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:46:07.476641 kubelet[3458]: E0527 17:46:07.474094 3458 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:46:07.495451 kubelet[3458]: I0527 17:46:07.494918 3458 factory.go:223] Registration of the containerd container factory successfully May 27 17:46:07.495631 kubelet[3458]: I0527 17:46:07.495593 3458 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:46:07.500859 kubelet[3458]: I0527 17:46:07.500826 3458 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 17:46:07.501062 kubelet[3458]: I0527 17:46:07.501047 3458 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:46:07.501169 kubelet[3458]: I0527 17:46:07.501159 3458 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:46:07.501241 kubelet[3458]: I0527 17:46:07.501233 3458 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:46:07.501370 kubelet[3458]: E0527 17:46:07.501331 3458 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:46:07.562560 kubelet[3458]: I0527 17:46:07.562321 3458 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:46:07.562560 kubelet[3458]: I0527 17:46:07.562343 3458 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:46:07.562560 kubelet[3458]: I0527 17:46:07.562367 3458 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:07.562765 kubelet[3458]: I0527 17:46:07.562576 3458 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:46:07.562765 kubelet[3458]: I0527 17:46:07.562589 3458 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:46:07.562765 kubelet[3458]: I0527 17:46:07.562610 3458 policy_none.go:49] "None policy: Start" May 27 17:46:07.562765 kubelet[3458]: I0527 17:46:07.562623 3458 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:46:07.562765 kubelet[3458]: I0527 17:46:07.562635 3458 state_mem.go:35] "Initializing new in-memory state store" May 27 17:46:07.562765 kubelet[3458]: I0527 17:46:07.562758 3458 state_mem.go:75] "Updated machine memory state" May 27 17:46:07.568678 kubelet[3458]: E0527 17:46:07.568657 3458 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:46:07.570445 kubelet[3458]: I0527 
17:46:07.569259 3458 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:46:07.570445 kubelet[3458]: I0527 17:46:07.569277 3458 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:46:07.570445 kubelet[3458]: I0527 17:46:07.569624 3458 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:46:07.576701 kubelet[3458]: E0527 17:46:07.576670 3458 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:46:07.603050 kubelet[3458]: I0527 17:46:07.602946 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-101" May 27 17:46:07.605005 kubelet[3458]: I0527 17:46:07.603777 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-101" May 27 17:46:07.606382 kubelet[3458]: I0527 17:46:07.604216 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-101" May 27 17:46:07.671591 kubelet[3458]: I0527 17:46:07.671551 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c330fc28422bae0efc20464502555c3-ca-certs\") pod \"kube-apiserver-ip-172-31-23-101\" (UID: \"3c330fc28422bae0efc20464502555c3\") " pod="kube-system/kube-apiserver-ip-172-31-23-101" May 27 17:46:07.671954 kubelet[3458]: I0527 17:46:07.671790 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c330fc28422bae0efc20464502555c3-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-101\" (UID: \"3c330fc28422bae0efc20464502555c3\") " pod="kube-system/kube-apiserver-ip-172-31-23-101" May 27 17:46:07.671954 kubelet[3458]: I0527 17:46:07.671815 3458 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101" May 27 17:46:07.671954 kubelet[3458]: I0527 17:46:07.671832 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101" May 27 17:46:07.671954 kubelet[3458]: I0527 17:46:07.671849 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101" May 27 17:46:07.671954 kubelet[3458]: I0527 17:46:07.671863 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101" May 27 17:46:07.672100 kubelet[3458]: I0527 17:46:07.671880 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c330fc28422bae0efc20464502555c3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-101\" (UID: \"3c330fc28422bae0efc20464502555c3\") " 
pod="kube-system/kube-apiserver-ip-172-31-23-101" May 27 17:46:07.672100 kubelet[3458]: I0527 17:46:07.671896 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aea2301cee29518de662977cde6c47de-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-101\" (UID: \"aea2301cee29518de662977cde6c47de\") " pod="kube-system/kube-controller-manager-ip-172-31-23-101" May 27 17:46:07.672100 kubelet[3458]: I0527 17:46:07.671914 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96344e2a91cea76acc74046951fca392-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-101\" (UID: \"96344e2a91cea76acc74046951fca392\") " pod="kube-system/kube-scheduler-ip-172-31-23-101" May 27 17:46:07.689490 kubelet[3458]: I0527 17:46:07.688097 3458 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-101" May 27 17:46:07.699401 kubelet[3458]: I0527 17:46:07.698835 3458 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-101" May 27 17:46:07.699401 kubelet[3458]: I0527 17:46:07.699048 3458 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-101" May 27 17:46:08.429897 kubelet[3458]: I0527 17:46:08.429855 3458 apiserver.go:52] "Watching apiserver" May 27 17:46:08.469848 kubelet[3458]: I0527 17:46:08.469806 3458 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:46:08.548598 kubelet[3458]: I0527 17:46:08.548461 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-101" May 27 17:46:08.549946 kubelet[3458]: I0527 17:46:08.549603 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-101" May 27 17:46:08.560942 kubelet[3458]: E0527 17:46:08.560833 
3458 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-101\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-101" May 27 17:46:08.562178 kubelet[3458]: E0527 17:46:08.562136 3458 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-101\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-101" May 27 17:46:08.585647 kubelet[3458]: I0527 17:46:08.585553 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-101" podStartSLOduration=1.585533498 podStartE2EDuration="1.585533498s" podCreationTimestamp="2025-05-27 17:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:08.574891602 +0000 UTC m=+1.251258742" watchObservedRunningTime="2025-05-27 17:46:08.585533498 +0000 UTC m=+1.261900632" May 27 17:46:08.585853 kubelet[3458]: I0527 17:46:08.585744 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-101" podStartSLOduration=1.585720155 podStartE2EDuration="1.585720155s" podCreationTimestamp="2025-05-27 17:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:08.584589182 +0000 UTC m=+1.260956321" watchObservedRunningTime="2025-05-27 17:46:08.585720155 +0000 UTC m=+1.262087292" May 27 17:46:08.627143 kubelet[3458]: I0527 17:46:08.626918 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-101" podStartSLOduration=1.6268970299999999 podStartE2EDuration="1.62689703s" podCreationTimestamp="2025-05-27 17:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:08.601342159 
+0000 UTC m=+1.277709302" watchObservedRunningTime="2025-05-27 17:46:08.62689703 +0000 UTC m=+1.303264189" May 27 17:46:13.525851 kubelet[3458]: I0527 17:46:13.525796 3458 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:46:13.526526 kubelet[3458]: I0527 17:46:13.526278 3458 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:46:13.526574 containerd[2001]: time="2025-05-27T17:46:13.526085972Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:46:14.594736 systemd[1]: Created slice kubepods-besteffort-pod2407ced5_65dd_4675_a84e_51853725637c.slice - libcontainer container kubepods-besteffort-pod2407ced5_65dd_4675_a84e_51853725637c.slice. May 27 17:46:14.617730 kubelet[3458]: I0527 17:46:14.617691 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2407ced5-65dd-4675-a84e-51853725637c-kube-proxy\") pod \"kube-proxy-sw76j\" (UID: \"2407ced5-65dd-4675-a84e-51853725637c\") " pod="kube-system/kube-proxy-sw76j" May 27 17:46:14.617730 kubelet[3458]: I0527 17:46:14.617730 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2407ced5-65dd-4675-a84e-51853725637c-xtables-lock\") pod \"kube-proxy-sw76j\" (UID: \"2407ced5-65dd-4675-a84e-51853725637c\") " pod="kube-system/kube-proxy-sw76j" May 27 17:46:14.618136 kubelet[3458]: I0527 17:46:14.617750 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2407ced5-65dd-4675-a84e-51853725637c-lib-modules\") pod \"kube-proxy-sw76j\" (UID: \"2407ced5-65dd-4675-a84e-51853725637c\") " pod="kube-system/kube-proxy-sw76j" May 27 17:46:14.618136 kubelet[3458]: I0527 
17:46:14.617766 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hv8j\" (UniqueName: \"kubernetes.io/projected/2407ced5-65dd-4675-a84e-51853725637c-kube-api-access-5hv8j\") pod \"kube-proxy-sw76j\" (UID: \"2407ced5-65dd-4675-a84e-51853725637c\") " pod="kube-system/kube-proxy-sw76j" May 27 17:46:14.743671 systemd[1]: Created slice kubepods-besteffort-pod8305c330_74bb_4405_a803_20f08594a8ab.slice - libcontainer container kubepods-besteffort-pod8305c330_74bb_4405_a803_20f08594a8ab.slice. May 27 17:46:14.820600 kubelet[3458]: I0527 17:46:14.820550 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79n8z\" (UniqueName: \"kubernetes.io/projected/8305c330-74bb-4405-a803-20f08594a8ab-kube-api-access-79n8z\") pod \"tigera-operator-844669ff44-rjlj2\" (UID: \"8305c330-74bb-4405-a803-20f08594a8ab\") " pod="tigera-operator/tigera-operator-844669ff44-rjlj2" May 27 17:46:14.820600 kubelet[3458]: I0527 17:46:14.820612 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8305c330-74bb-4405-a803-20f08594a8ab-var-lib-calico\") pod \"tigera-operator-844669ff44-rjlj2\" (UID: \"8305c330-74bb-4405-a803-20f08594a8ab\") " pod="tigera-operator/tigera-operator-844669ff44-rjlj2" May 27 17:46:14.905679 containerd[2001]: time="2025-05-27T17:46:14.905570451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sw76j,Uid:2407ced5-65dd-4675-a84e-51853725637c,Namespace:kube-system,Attempt:0,}" May 27 17:46:14.939650 containerd[2001]: time="2025-05-27T17:46:14.939576997Z" level=info msg="connecting to shim 10a61d2e1fa63247cf25f1c591bc5e3bc58e333311decdba32863c0bc3682246" address="unix:///run/containerd/s/0ec5ca09c8a4930a4bbee9c9e4c1fcd4c193352ecb114fa1751d7f02268cf25c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:14.969695 systemd[1]: 
Started cri-containerd-10a61d2e1fa63247cf25f1c591bc5e3bc58e333311decdba32863c0bc3682246.scope - libcontainer container 10a61d2e1fa63247cf25f1c591bc5e3bc58e333311decdba32863c0bc3682246. May 27 17:46:15.011814 containerd[2001]: time="2025-05-27T17:46:15.011381261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sw76j,Uid:2407ced5-65dd-4675-a84e-51853725637c,Namespace:kube-system,Attempt:0,} returns sandbox id \"10a61d2e1fa63247cf25f1c591bc5e3bc58e333311decdba32863c0bc3682246\"" May 27 17:46:15.026604 containerd[2001]: time="2025-05-27T17:46:15.026564043Z" level=info msg="CreateContainer within sandbox \"10a61d2e1fa63247cf25f1c591bc5e3bc58e333311decdba32863c0bc3682246\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:46:15.045075 containerd[2001]: time="2025-05-27T17:46:15.045011436Z" level=info msg="Container 130ea6cba944aba050f21a2598fec07918d7e1cdfc7e5c0b9efab0a9a94609b4: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:15.050841 containerd[2001]: time="2025-05-27T17:46:15.050796396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rjlj2,Uid:8305c330-74bb-4405-a803-20f08594a8ab,Namespace:tigera-operator,Attempt:0,}" May 27 17:46:15.059775 containerd[2001]: time="2025-05-27T17:46:15.059731903Z" level=info msg="CreateContainer within sandbox \"10a61d2e1fa63247cf25f1c591bc5e3bc58e333311decdba32863c0bc3682246\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"130ea6cba944aba050f21a2598fec07918d7e1cdfc7e5c0b9efab0a9a94609b4\"" May 27 17:46:15.060359 containerd[2001]: time="2025-05-27T17:46:15.060276461Z" level=info msg="StartContainer for \"130ea6cba944aba050f21a2598fec07918d7e1cdfc7e5c0b9efab0a9a94609b4\"" May 27 17:46:15.062596 containerd[2001]: time="2025-05-27T17:46:15.062564415Z" level=info msg="connecting to shim 130ea6cba944aba050f21a2598fec07918d7e1cdfc7e5c0b9efab0a9a94609b4" 
address="unix:///run/containerd/s/0ec5ca09c8a4930a4bbee9c9e4c1fcd4c193352ecb114fa1751d7f02268cf25c" protocol=ttrpc version=3 May 27 17:46:15.086846 systemd[1]: Started cri-containerd-130ea6cba944aba050f21a2598fec07918d7e1cdfc7e5c0b9efab0a9a94609b4.scope - libcontainer container 130ea6cba944aba050f21a2598fec07918d7e1cdfc7e5c0b9efab0a9a94609b4. May 27 17:46:15.112015 containerd[2001]: time="2025-05-27T17:46:15.111945385Z" level=info msg="connecting to shim 68be4e11cca0b9cc4fc8864b5a210dfc4361c190fd00a72967cc87e0db45107a" address="unix:///run/containerd/s/f975fcae1ae5e441b0b80dddf6489facc236bcd8c5dfb9661ffcd55c0eed7f5f" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:15.150675 systemd[1]: Started cri-containerd-68be4e11cca0b9cc4fc8864b5a210dfc4361c190fd00a72967cc87e0db45107a.scope - libcontainer container 68be4e11cca0b9cc4fc8864b5a210dfc4361c190fd00a72967cc87e0db45107a. May 27 17:46:15.167840 containerd[2001]: time="2025-05-27T17:46:15.167305538Z" level=info msg="StartContainer for \"130ea6cba944aba050f21a2598fec07918d7e1cdfc7e5c0b9efab0a9a94609b4\" returns successfully" May 27 17:46:15.217053 containerd[2001]: time="2025-05-27T17:46:15.217010506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rjlj2,Uid:8305c330-74bb-4405-a803-20f08594a8ab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"68be4e11cca0b9cc4fc8864b5a210dfc4361c190fd00a72967cc87e0db45107a\"" May 27 17:46:15.218798 containerd[2001]: time="2025-05-27T17:46:15.218767335Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:46:15.770590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358578095.mount: Deactivated successfully. 
May 27 17:46:16.470631 kubelet[3458]: I0527 17:46:16.470386 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sw76j" podStartSLOduration=2.470365318 podStartE2EDuration="2.470365318s" podCreationTimestamp="2025-05-27 17:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:15.583097888 +0000 UTC m=+8.259465041" watchObservedRunningTime="2025-05-27 17:46:16.470365318 +0000 UTC m=+9.146732459" May 27 17:46:16.843227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900085441.mount: Deactivated successfully. May 27 17:46:17.869650 containerd[2001]: time="2025-05-27T17:46:17.869469619Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:17.871279 containerd[2001]: time="2025-05-27T17:46:17.871190308Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 17:46:17.873351 containerd[2001]: time="2025-05-27T17:46:17.873264712Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:17.882446 containerd[2001]: time="2025-05-27T17:46:17.881684467Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:17.883813 containerd[2001]: time="2025-05-27T17:46:17.883758646Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest 
\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.663655859s" May 27 17:46:17.883813 containerd[2001]: time="2025-05-27T17:46:17.883804032Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 17:46:17.890080 containerd[2001]: time="2025-05-27T17:46:17.890023752Z" level=info msg="CreateContainer within sandbox \"68be4e11cca0b9cc4fc8864b5a210dfc4361c190fd00a72967cc87e0db45107a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:46:17.903916 containerd[2001]: time="2025-05-27T17:46:17.902531156Z" level=info msg="Container 67904ef1ad3f188d4c2f4c66fdc2b5d790e11099d598103664f3a978727ebcd9: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:17.919312 containerd[2001]: time="2025-05-27T17:46:17.919253141Z" level=info msg="CreateContainer within sandbox \"68be4e11cca0b9cc4fc8864b5a210dfc4361c190fd00a72967cc87e0db45107a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"67904ef1ad3f188d4c2f4c66fdc2b5d790e11099d598103664f3a978727ebcd9\"" May 27 17:46:17.920260 containerd[2001]: time="2025-05-27T17:46:17.920235516Z" level=info msg="StartContainer for \"67904ef1ad3f188d4c2f4c66fdc2b5d790e11099d598103664f3a978727ebcd9\"" May 27 17:46:17.921405 containerd[2001]: time="2025-05-27T17:46:17.921344589Z" level=info msg="connecting to shim 67904ef1ad3f188d4c2f4c66fdc2b5d790e11099d598103664f3a978727ebcd9" address="unix:///run/containerd/s/f975fcae1ae5e441b0b80dddf6489facc236bcd8c5dfb9661ffcd55c0eed7f5f" protocol=ttrpc version=3 May 27 17:46:17.954665 systemd[1]: Started cri-containerd-67904ef1ad3f188d4c2f4c66fdc2b5d790e11099d598103664f3a978727ebcd9.scope - libcontainer container 67904ef1ad3f188d4c2f4c66fdc2b5d790e11099d598103664f3a978727ebcd9. 
May 27 17:46:18.003622 containerd[2001]: time="2025-05-27T17:46:18.003570818Z" level=info msg="StartContainer for \"67904ef1ad3f188d4c2f4c66fdc2b5d790e11099d598103664f3a978727ebcd9\" returns successfully" May 27 17:46:23.438283 sudo[2358]: pam_unix(sudo:session): session closed for user root May 27 17:46:23.461734 sshd[2357]: Connection closed by 139.178.68.195 port 45160 May 27 17:46:23.463420 sshd-session[2355]: pam_unix(sshd:session): session closed for user core May 27 17:46:23.469869 systemd[1]: sshd@8-172.31.23.101:22-139.178.68.195:45160.service: Deactivated successfully. May 27 17:46:23.477358 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:46:23.478823 systemd[1]: session-9.scope: Consumed 8.026s CPU time, 151.3M memory peak. May 27 17:46:23.485483 systemd-logind[1979]: Session 9 logged out. Waiting for processes to exit. May 27 17:46:23.489604 systemd-logind[1979]: Removed session 9. May 27 17:46:28.746080 kubelet[3458]: I0527 17:46:28.745936 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-rjlj2" podStartSLOduration=12.077065291 podStartE2EDuration="14.744399003s" podCreationTimestamp="2025-05-27 17:46:14 +0000 UTC" firstStartedPulling="2025-05-27 17:46:15.218367516 +0000 UTC m=+7.894734634" lastFinishedPulling="2025-05-27 17:46:17.885701229 +0000 UTC m=+10.562068346" observedRunningTime="2025-05-27 17:46:18.587222255 +0000 UTC m=+11.263589391" watchObservedRunningTime="2025-05-27 17:46:28.744399003 +0000 UTC m=+21.420766147" May 27 17:46:28.764481 systemd[1]: Created slice kubepods-besteffort-pod9ed5d421_abb4_43b7_9563_2dd78b068bb7.slice - libcontainer container kubepods-besteffort-pod9ed5d421_abb4_43b7_9563_2dd78b068bb7.slice. 
May 27 17:46:28.849478 kubelet[3458]: I0527 17:46:28.849417 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ed5d421-abb4-43b7-9563-2dd78b068bb7-tigera-ca-bundle\") pod \"calico-typha-8566469dd7-s8vn2\" (UID: \"9ed5d421-abb4-43b7-9563-2dd78b068bb7\") " pod="calico-system/calico-typha-8566469dd7-s8vn2" May 27 17:46:28.849627 kubelet[3458]: I0527 17:46:28.849580 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ed5d421-abb4-43b7-9563-2dd78b068bb7-typha-certs\") pod \"calico-typha-8566469dd7-s8vn2\" (UID: \"9ed5d421-abb4-43b7-9563-2dd78b068bb7\") " pod="calico-system/calico-typha-8566469dd7-s8vn2" May 27 17:46:28.849697 kubelet[3458]: I0527 17:46:28.849629 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlk75\" (UniqueName: \"kubernetes.io/projected/9ed5d421-abb4-43b7-9563-2dd78b068bb7-kube-api-access-mlk75\") pod \"calico-typha-8566469dd7-s8vn2\" (UID: \"9ed5d421-abb4-43b7-9563-2dd78b068bb7\") " pod="calico-system/calico-typha-8566469dd7-s8vn2" May 27 17:46:29.075444 containerd[2001]: time="2025-05-27T17:46:29.075354649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8566469dd7-s8vn2,Uid:9ed5d421-abb4-43b7-9563-2dd78b068bb7,Namespace:calico-system,Attempt:0,}" May 27 17:46:29.135718 systemd[1]: Created slice kubepods-besteffort-podb5d4e884_d2a2_4b8f_b133_c3366952d971.slice - libcontainer container kubepods-besteffort-podb5d4e884_d2a2_4b8f_b133_c3366952d971.slice. 
May 27 17:46:29.155452 containerd[2001]: time="2025-05-27T17:46:29.155377417Z" level=info msg="connecting to shim 888ee0f65c8af2fb7dfd6ff2ac05162eb8a8635a25e8e8452b2a080f703743d7" address="unix:///run/containerd/s/afeebb9aeb44d4ec05faa0c87ec6d9e2447cebc35f152f643d2ee43323701ba0" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:29.201756 systemd[1]: Started cri-containerd-888ee0f65c8af2fb7dfd6ff2ac05162eb8a8635a25e8e8452b2a080f703743d7.scope - libcontainer container 888ee0f65c8af2fb7dfd6ff2ac05162eb8a8635a25e8e8452b2a080f703743d7. May 27 17:46:29.253154 kubelet[3458]: I0527 17:46:29.253110 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-cni-log-dir\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253154 kubelet[3458]: I0527 17:46:29.253156 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-cni-bin-dir\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253338 kubelet[3458]: I0527 17:46:29.253178 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5d4e884-d2a2-4b8f-b133-c3366952d971-tigera-ca-bundle\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253338 kubelet[3458]: I0527 17:46:29.253215 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-policysync\") pod \"calico-node-sf7vs\" (UID: 
\"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253338 kubelet[3458]: I0527 17:46:29.253235 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55tp9\" (UniqueName: \"kubernetes.io/projected/b5d4e884-d2a2-4b8f-b133-c3366952d971-kube-api-access-55tp9\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253338 kubelet[3458]: I0527 17:46:29.253264 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-lib-modules\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253338 kubelet[3458]: I0527 17:46:29.253286 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b5d4e884-d2a2-4b8f-b133-c3366952d971-node-certs\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253574 kubelet[3458]: I0527 17:46:29.253309 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-var-lib-calico\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253574 kubelet[3458]: I0527 17:46:29.253336 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-flexvol-driver-host\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " 
pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253574 kubelet[3458]: I0527 17:46:29.253361 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-var-run-calico\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253574 kubelet[3458]: I0527 17:46:29.253385 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-cni-net-dir\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.253574 kubelet[3458]: I0527 17:46:29.253408 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5d4e884-d2a2-4b8f-b133-c3366952d971-xtables-lock\") pod \"calico-node-sf7vs\" (UID: \"b5d4e884-d2a2-4b8f-b133-c3366952d971\") " pod="calico-system/calico-node-sf7vs" May 27 17:46:29.281335 containerd[2001]: time="2025-05-27T17:46:29.281291516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8566469dd7-s8vn2,Uid:9ed5d421-abb4-43b7-9563-2dd78b068bb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"888ee0f65c8af2fb7dfd6ff2ac05162eb8a8635a25e8e8452b2a080f703743d7\"" May 27 17:46:29.283583 containerd[2001]: time="2025-05-27T17:46:29.283543595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:46:29.369533 kubelet[3458]: E0527 17:46:29.361341 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.369533 kubelet[3458]: W0527 17:46:29.361370 3458 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.369533 kubelet[3458]: E0527 17:46:29.366332 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.376451 kubelet[3458]: E0527 17:46:29.376090 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.376451 kubelet[3458]: W0527 17:46:29.376121 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.376451 kubelet[3458]: E0527 17:46:29.376146 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.402291 kubelet[3458]: E0527 17:46:29.402258 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.402291 kubelet[3458]: W0527 17:46:29.402287 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.402912 kubelet[3458]: E0527 17:46:29.402880 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.452653 containerd[2001]: time="2025-05-27T17:46:29.452599610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sf7vs,Uid:b5d4e884-d2a2-4b8f-b133-c3366952d971,Namespace:calico-system,Attempt:0,}" May 27 17:46:29.507450 containerd[2001]: time="2025-05-27T17:46:29.506858883Z" level=info msg="connecting to shim ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080" address="unix:///run/containerd/s/700d0273561ed625c22211e4482d163e8520dbdf8d4daa373bf09ccf2830952e" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:29.577671 systemd[1]: Started cri-containerd-ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080.scope - libcontainer container ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080. May 27 17:46:29.630824 kubelet[3458]: E0527 17:46:29.630699 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2sxmb" podUID="b0f763d7-ff63-4f3f-93a2-96ef656a46c5" May 27 17:46:29.702000 kubelet[3458]: E0527 17:46:29.701949 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.702000 kubelet[3458]: W0527 17:46:29.701994 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.702313 kubelet[3458]: E0527 17:46:29.702019 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.702369 kubelet[3458]: E0527 17:46:29.702313 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.702369 kubelet[3458]: W0527 17:46:29.702324 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.702369 kubelet[3458]: E0527 17:46:29.702337 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.702846 kubelet[3458]: E0527 17:46:29.702638 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.702846 kubelet[3458]: W0527 17:46:29.702649 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.702846 kubelet[3458]: E0527 17:46:29.702661 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.703370 kubelet[3458]: E0527 17:46:29.702966 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.703370 kubelet[3458]: W0527 17:46:29.702977 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.703370 kubelet[3458]: E0527 17:46:29.703004 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.703370 kubelet[3458]: E0527 17:46:29.703290 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.703370 kubelet[3458]: W0527 17:46:29.703317 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.703370 kubelet[3458]: E0527 17:46:29.703331 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.704807 kubelet[3458]: E0527 17:46:29.703570 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.704807 kubelet[3458]: W0527 17:46:29.703581 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.704807 kubelet[3458]: E0527 17:46:29.703607 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.704807 kubelet[3458]: E0527 17:46:29.703910 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.704807 kubelet[3458]: W0527 17:46:29.703935 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.704807 kubelet[3458]: E0527 17:46:29.703948 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.704807 kubelet[3458]: E0527 17:46:29.704200 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.704807 kubelet[3458]: W0527 17:46:29.704210 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.704807 kubelet[3458]: E0527 17:46:29.704222 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.704807 kubelet[3458]: E0527 17:46:29.704518 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.705237 kubelet[3458]: W0527 17:46:29.704529 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.705237 kubelet[3458]: E0527 17:46:29.704556 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.705237 kubelet[3458]: E0527 17:46:29.704796 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.705237 kubelet[3458]: W0527 17:46:29.704805 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.705237 kubelet[3458]: E0527 17:46:29.704833 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.705237 kubelet[3458]: E0527 17:46:29.705060 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.705237 kubelet[3458]: W0527 17:46:29.705069 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.705237 kubelet[3458]: E0527 17:46:29.705080 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.706648 kubelet[3458]: E0527 17:46:29.705265 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.706648 kubelet[3458]: W0527 17:46:29.705274 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.706648 kubelet[3458]: E0527 17:46:29.705284 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.706648 kubelet[3458]: E0527 17:46:29.705572 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.706648 kubelet[3458]: W0527 17:46:29.705584 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.706648 kubelet[3458]: E0527 17:46:29.705610 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.706648 kubelet[3458]: E0527 17:46:29.705847 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.706648 kubelet[3458]: W0527 17:46:29.705857 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.706648 kubelet[3458]: E0527 17:46:29.705869 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.706648 kubelet[3458]: E0527 17:46:29.706097 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.707656 kubelet[3458]: W0527 17:46:29.706106 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.707656 kubelet[3458]: E0527 17:46:29.706119 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.707656 kubelet[3458]: E0527 17:46:29.706329 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.707656 kubelet[3458]: W0527 17:46:29.706338 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.707656 kubelet[3458]: E0527 17:46:29.706349 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.707656 kubelet[3458]: E0527 17:46:29.706610 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.707656 kubelet[3458]: W0527 17:46:29.706621 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.707656 kubelet[3458]: E0527 17:46:29.706633 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.707656 kubelet[3458]: E0527 17:46:29.706814 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.707656 kubelet[3458]: W0527 17:46:29.706825 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.708764 kubelet[3458]: E0527 17:46:29.706836 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.708764 kubelet[3458]: E0527 17:46:29.707015 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.708764 kubelet[3458]: W0527 17:46:29.707024 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.708764 kubelet[3458]: E0527 17:46:29.707035 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.708764 kubelet[3458]: E0527 17:46:29.707233 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.708764 kubelet[3458]: W0527 17:46:29.707242 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.708764 kubelet[3458]: E0527 17:46:29.707253 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.753161 containerd[2001]: time="2025-05-27T17:46:29.753109774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sf7vs,Uid:b5d4e884-d2a2-4b8f-b133-c3366952d971,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080\"" May 27 17:46:29.758095 kubelet[3458]: E0527 17:46:29.757333 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.758095 kubelet[3458]: W0527 17:46:29.757355 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.758095 kubelet[3458]: E0527 17:46:29.757376 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.758095 kubelet[3458]: I0527 17:46:29.757417 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58bfj\" (UniqueName: \"kubernetes.io/projected/b0f763d7-ff63-4f3f-93a2-96ef656a46c5-kube-api-access-58bfj\") pod \"csi-node-driver-2sxmb\" (UID: \"b0f763d7-ff63-4f3f-93a2-96ef656a46c5\") " pod="calico-system/csi-node-driver-2sxmb" May 27 17:46:29.758095 kubelet[3458]: E0527 17:46:29.757676 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.758095 kubelet[3458]: W0527 17:46:29.757739 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.758095 kubelet[3458]: E0527 17:46:29.757771 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.759965 kubelet[3458]: E0527 17:46:29.758461 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.759965 kubelet[3458]: W0527 17:46:29.758475 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.759965 kubelet[3458]: E0527 17:46:29.758491 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.759965 kubelet[3458]: E0527 17:46:29.758816 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.759965 kubelet[3458]: W0527 17:46:29.758830 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.759965 kubelet[3458]: E0527 17:46:29.758844 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.759965 kubelet[3458]: I0527 17:46:29.758885 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b0f763d7-ff63-4f3f-93a2-96ef656a46c5-socket-dir\") pod \"csi-node-driver-2sxmb\" (UID: \"b0f763d7-ff63-4f3f-93a2-96ef656a46c5\") " pod="calico-system/csi-node-driver-2sxmb" May 27 17:46:29.759965 kubelet[3458]: E0527 17:46:29.759121 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.759965 kubelet[3458]: W0527 17:46:29.759131 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.761454 kubelet[3458]: E0527 17:46:29.759143 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.761454 kubelet[3458]: I0527 17:46:29.759180 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b0f763d7-ff63-4f3f-93a2-96ef656a46c5-registration-dir\") pod \"csi-node-driver-2sxmb\" (UID: \"b0f763d7-ff63-4f3f-93a2-96ef656a46c5\") " pod="calico-system/csi-node-driver-2sxmb" May 27 17:46:29.761454 kubelet[3458]: E0527 17:46:29.759994 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.761454 kubelet[3458]: W0527 17:46:29.760007 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.761454 kubelet[3458]: E0527 17:46:29.760022 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.761454 kubelet[3458]: I0527 17:46:29.760046 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b0f763d7-ff63-4f3f-93a2-96ef656a46c5-varrun\") pod \"csi-node-driver-2sxmb\" (UID: \"b0f763d7-ff63-4f3f-93a2-96ef656a46c5\") " pod="calico-system/csi-node-driver-2sxmb" May 27 17:46:29.761454 kubelet[3458]: E0527 17:46:29.761025 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.761454 kubelet[3458]: W0527 17:46:29.761039 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.761454 kubelet[3458]: E0527 17:46:29.761054 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.762464 kubelet[3458]: I0527 17:46:29.761324 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f763d7-ff63-4f3f-93a2-96ef656a46c5-kubelet-dir\") pod \"csi-node-driver-2sxmb\" (UID: \"b0f763d7-ff63-4f3f-93a2-96ef656a46c5\") " pod="calico-system/csi-node-driver-2sxmb" May 27 17:46:29.762464 kubelet[3458]: E0527 17:46:29.761660 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.762464 kubelet[3458]: W0527 17:46:29.761671 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.762464 kubelet[3458]: E0527 17:46:29.761694 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.762464 kubelet[3458]: E0527 17:46:29.762026 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.762464 kubelet[3458]: W0527 17:46:29.762048 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.762464 kubelet[3458]: E0527 17:46:29.762062 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.762464 kubelet[3458]: E0527 17:46:29.762365 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.762464 kubelet[3458]: W0527 17:46:29.762377 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.764030 kubelet[3458]: E0527 17:46:29.762390 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.764030 kubelet[3458]: E0527 17:46:29.762740 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.764030 kubelet[3458]: W0527 17:46:29.762750 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.764030 kubelet[3458]: E0527 17:46:29.762771 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.764030 kubelet[3458]: E0527 17:46:29.763053 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.764030 kubelet[3458]: W0527 17:46:29.763082 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.764030 kubelet[3458]: E0527 17:46:29.763094 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.764030 kubelet[3458]: E0527 17:46:29.763360 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.764030 kubelet[3458]: W0527 17:46:29.763370 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.764030 kubelet[3458]: E0527 17:46:29.763399 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.764405 kubelet[3458]: E0527 17:46:29.763701 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.764405 kubelet[3458]: W0527 17:46:29.763720 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.764405 kubelet[3458]: E0527 17:46:29.763733 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.764405 kubelet[3458]: E0527 17:46:29.764016 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.764405 kubelet[3458]: W0527 17:46:29.764026 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.764405 kubelet[3458]: E0527 17:46:29.764038 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.862679 kubelet[3458]: E0527 17:46:29.862646 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.862679 kubelet[3458]: W0527 17:46:29.862669 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.862849 kubelet[3458]: E0527 17:46:29.862694 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.862927 kubelet[3458]: E0527 17:46:29.862910 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.862974 kubelet[3458]: W0527 17:46:29.862923 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.862974 kubelet[3458]: E0527 17:46:29.862950 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.863174 kubelet[3458]: E0527 17:46:29.863160 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.863174 kubelet[3458]: W0527 17:46:29.863171 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.863236 kubelet[3458]: E0527 17:46:29.863180 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.863411 kubelet[3458]: E0527 17:46:29.863397 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.863411 kubelet[3458]: W0527 17:46:29.863408 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.863526 kubelet[3458]: E0527 17:46:29.863417 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.863783 kubelet[3458]: E0527 17:46:29.863712 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.863783 kubelet[3458]: W0527 17:46:29.863740 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.863783 kubelet[3458]: E0527 17:46:29.863755 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.864090 kubelet[3458]: E0527 17:46:29.864051 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.864090 kubelet[3458]: W0527 17:46:29.864074 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.864202 kubelet[3458]: E0527 17:46:29.864099 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.864461 kubelet[3458]: E0527 17:46:29.864310 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.864461 kubelet[3458]: W0527 17:46:29.864325 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.864461 kubelet[3458]: E0527 17:46:29.864338 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.864811 kubelet[3458]: E0527 17:46:29.864771 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.864811 kubelet[3458]: W0527 17:46:29.864787 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.864811 kubelet[3458]: E0527 17:46:29.864798 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.865001 kubelet[3458]: E0527 17:46:29.864986 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.865001 kubelet[3458]: W0527 17:46:29.864996 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.865057 kubelet[3458]: E0527 17:46:29.865004 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.865225 kubelet[3458]: E0527 17:46:29.865204 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.865225 kubelet[3458]: W0527 17:46:29.865220 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.865399 kubelet[3458]: E0527 17:46:29.865233 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.865457 kubelet[3458]: E0527 17:46:29.865439 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.865457 kubelet[3458]: W0527 17:46:29.865447 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.865506 kubelet[3458]: E0527 17:46:29.865456 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.865618 kubelet[3458]: E0527 17:46:29.865607 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.865618 kubelet[3458]: W0527 17:46:29.865618 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.865678 kubelet[3458]: E0527 17:46:29.865626 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.865810 kubelet[3458]: E0527 17:46:29.865792 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.865810 kubelet[3458]: W0527 17:46:29.865803 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.865864 kubelet[3458]: E0527 17:46:29.865810 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.866278 kubelet[3458]: E0527 17:46:29.866261 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.866278 kubelet[3458]: W0527 17:46:29.866276 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.866333 kubelet[3458]: E0527 17:46:29.866289 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.866972 kubelet[3458]: E0527 17:46:29.866924 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.866972 kubelet[3458]: W0527 17:46:29.866942 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.866972 kubelet[3458]: E0527 17:46:29.866953 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.867137 kubelet[3458]: E0527 17:46:29.867118 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.867137 kubelet[3458]: W0527 17:46:29.867124 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.867137 kubelet[3458]: E0527 17:46:29.867131 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.867322 kubelet[3458]: E0527 17:46:29.867308 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.867322 kubelet[3458]: W0527 17:46:29.867319 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.867375 kubelet[3458]: E0527 17:46:29.867329 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.867593 kubelet[3458]: E0527 17:46:29.867574 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.867630 kubelet[3458]: W0527 17:46:29.867592 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.867630 kubelet[3458]: E0527 17:46:29.867605 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.872924 kubelet[3458]: E0527 17:46:29.872784 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.872924 kubelet[3458]: W0527 17:46:29.872814 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.872924 kubelet[3458]: E0527 17:46:29.872862 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.873642 kubelet[3458]: E0527 17:46:29.873626 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.873835 kubelet[3458]: W0527 17:46:29.873713 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.873835 kubelet[3458]: E0527 17:46:29.873732 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.874876 kubelet[3458]: E0527 17:46:29.874850 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.874876 kubelet[3458]: W0527 17:46:29.874869 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.875064 kubelet[3458]: E0527 17:46:29.874906 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.875487 kubelet[3458]: E0527 17:46:29.875160 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.875487 kubelet[3458]: W0527 17:46:29.875171 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.875487 kubelet[3458]: E0527 17:46:29.875187 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.875622 kubelet[3458]: E0527 17:46:29.875506 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.875622 kubelet[3458]: W0527 17:46:29.875538 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.875622 kubelet[3458]: E0527 17:46:29.875553 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.875911 kubelet[3458]: E0527 17:46:29.875895 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.875911 kubelet[3458]: W0527 17:46:29.875911 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.876015 kubelet[3458]: E0527 17:46:29.875923 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:29.876224 kubelet[3458]: E0527 17:46:29.876205 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.876282 kubelet[3458]: W0527 17:46:29.876243 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.876282 kubelet[3458]: E0527 17:46:29.876258 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:29.878921 kubelet[3458]: E0527 17:46:29.878900 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:29.878921 kubelet[3458]: W0527 17:46:29.878915 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:29.879086 kubelet[3458]: E0527 17:46:29.878931 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:30.797582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467854059.mount: Deactivated successfully. 
May 27 17:46:31.504649 kubelet[3458]: E0527 17:46:31.504320 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2sxmb" podUID="b0f763d7-ff63-4f3f-93a2-96ef656a46c5" May 27 17:46:31.692932 containerd[2001]: time="2025-05-27T17:46:31.692013204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:31.694924 containerd[2001]: time="2025-05-27T17:46:31.694838080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 17:46:31.702469 containerd[2001]: time="2025-05-27T17:46:31.702024235Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:31.706393 containerd[2001]: time="2025-05-27T17:46:31.705737347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:31.706393 containerd[2001]: time="2025-05-27T17:46:31.705933823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.422345603s" May 27 17:46:31.706393 containerd[2001]: time="2025-05-27T17:46:31.705967142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 17:46:31.707733 containerd[2001]: time="2025-05-27T17:46:31.707684692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:46:31.730796 containerd[2001]: time="2025-05-27T17:46:31.730748747Z" level=info msg="CreateContainer within sandbox \"888ee0f65c8af2fb7dfd6ff2ac05162eb8a8635a25e8e8452b2a080f703743d7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:46:31.745658 containerd[2001]: time="2025-05-27T17:46:31.745610119Z" level=info msg="Container 29d2a710883b73005dd856913a432259fd2a716ff0c3f1c54f7b9ceef781d0d5: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:31.794294 containerd[2001]: time="2025-05-27T17:46:31.793863452Z" level=info msg="CreateContainer within sandbox \"888ee0f65c8af2fb7dfd6ff2ac05162eb8a8635a25e8e8452b2a080f703743d7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"29d2a710883b73005dd856913a432259fd2a716ff0c3f1c54f7b9ceef781d0d5\"" May 27 17:46:31.795896 containerd[2001]: time="2025-05-27T17:46:31.795664614Z" level=info msg="StartContainer for \"29d2a710883b73005dd856913a432259fd2a716ff0c3f1c54f7b9ceef781d0d5\"" May 27 17:46:31.801018 containerd[2001]: time="2025-05-27T17:46:31.800945568Z" level=info msg="connecting to shim 29d2a710883b73005dd856913a432259fd2a716ff0c3f1c54f7b9ceef781d0d5" address="unix:///run/containerd/s/afeebb9aeb44d4ec05faa0c87ec6d9e2447cebc35f152f643d2ee43323701ba0" protocol=ttrpc version=3 May 27 17:46:31.891832 systemd[1]: Started cri-containerd-29d2a710883b73005dd856913a432259fd2a716ff0c3f1c54f7b9ceef781d0d5.scope - libcontainer container 29d2a710883b73005dd856913a432259fd2a716ff0c3f1c54f7b9ceef781d0d5. 
May 27 17:46:31.962803 containerd[2001]: time="2025-05-27T17:46:31.962739723Z" level=info msg="StartContainer for \"29d2a710883b73005dd856913a432259fd2a716ff0c3f1c54f7b9ceef781d0d5\" returns successfully" May 27 17:46:32.636001 kubelet[3458]: E0527 17:46:32.635973 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.636001 kubelet[3458]: W0527 17:46:32.635998 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.636807 kubelet[3458]: E0527 17:46:32.636030 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:32.636807 kubelet[3458]: E0527 17:46:32.636270 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.636807 kubelet[3458]: W0527 17:46:32.636281 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.636807 kubelet[3458]: E0527 17:46:32.636295 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:32.636807 kubelet[3458]: E0527 17:46:32.636492 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.636807 kubelet[3458]: W0527 17:46:32.636501 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.636807 kubelet[3458]: E0527 17:46:32.636514 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:32.637183 kubelet[3458]: E0527 17:46:32.636874 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.637183 kubelet[3458]: W0527 17:46:32.636885 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.637183 kubelet[3458]: E0527 17:46:32.636899 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:32.637643 kubelet[3458]: E0527 17:46:32.637624 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.637643 kubelet[3458]: W0527 17:46:32.637640 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.637816 kubelet[3458]: E0527 17:46:32.637653 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:32.637866 kubelet[3458]: E0527 17:46:32.637850 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.637866 kubelet[3458]: W0527 17:46:32.637860 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.638042 kubelet[3458]: E0527 17:46:32.637871 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:32.639490 kubelet[3458]: E0527 17:46:32.639468 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.639582 kubelet[3458]: W0527 17:46:32.639490 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.639582 kubelet[3458]: E0527 17:46:32.639505 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:32.639828 kubelet[3458]: E0527 17:46:32.639814 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.639828 kubelet[3458]: W0527 17:46:32.639828 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.639936 kubelet[3458]: E0527 17:46:32.639840 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:32.669693 kubelet[3458]: I0527 17:46:32.669628 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8566469dd7-s8vn2" podStartSLOduration=2.245481846 podStartE2EDuration="4.669607869s" podCreationTimestamp="2025-05-27 17:46:28 +0000 UTC" firstStartedPulling="2025-05-27 17:46:29.283048227 +0000 UTC m=+21.959415357" lastFinishedPulling="2025-05-27 17:46:31.707174251 +0000 UTC m=+24.383541380" observedRunningTime="2025-05-27 17:46:32.664473596 +0000 UTC m=+25.340840736" watchObservedRunningTime="2025-05-27 17:46:32.669607869 +0000 UTC m=+25.345975009" May 27 17:46:32.694981 kubelet[3458]: E0527 17:46:32.694951 3458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:32.695147 kubelet[3458]: W0527 17:46:32.694978 3458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:32.695147 kubelet[3458]: E0527 17:46:32.695019 3458 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:33.315462 containerd[2001]: time="2025-05-27T17:46:33.315393544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:33.316444 containerd[2001]: time="2025-05-27T17:46:33.316168320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 17:46:33.317732 containerd[2001]: time="2025-05-27T17:46:33.317701647Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:33.320055 containerd[2001]: time="2025-05-27T17:46:33.320027126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:33.320521 containerd[2001]: time="2025-05-27T17:46:33.320408487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.612491411s" May 27 17:46:33.320521 containerd[2001]: time="2025-05-27T17:46:33.320474472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 17:46:33.326386 containerd[2001]: time="2025-05-27T17:46:33.326341937Z" level=info msg="CreateContainer within sandbox \"ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:46:33.345449 containerd[2001]: time="2025-05-27T17:46:33.342770143Z" level=info msg="Container f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:33.356023 containerd[2001]: time="2025-05-27T17:46:33.355973448Z" level=info msg="CreateContainer within sandbox \"ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7\"" May 27 17:46:33.357098 containerd[2001]: time="2025-05-27T17:46:33.357068777Z" level=info msg="StartContainer for \"f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7\"" May 27 17:46:33.358685 containerd[2001]: time="2025-05-27T17:46:33.358634947Z" level=info msg="connecting to shim f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7" address="unix:///run/containerd/s/700d0273561ed625c22211e4482d163e8520dbdf8d4daa373bf09ccf2830952e" protocol=ttrpc version=3 May 27 17:46:33.386651 systemd[1]: Started cri-containerd-f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7.scope - libcontainer container 
f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7. May 27 17:46:33.431722 containerd[2001]: time="2025-05-27T17:46:33.431656149Z" level=info msg="StartContainer for \"f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7\" returns successfully" May 27 17:46:33.446397 systemd[1]: cri-containerd-f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7.scope: Deactivated successfully. May 27 17:46:33.446741 systemd[1]: cri-containerd-f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7.scope: Consumed 33ms CPU time, 6.2M memory peak, 4.5M written to disk. May 27 17:46:33.493390 containerd[2001]: time="2025-05-27T17:46:33.493157473Z" level=info msg="received exit event container_id:\"f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7\" id:\"f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7\" pid:4137 exited_at:{seconds:1748367993 nanos:452773297}" May 27 17:46:33.494453 containerd[2001]: time="2025-05-27T17:46:33.494104585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7\" id:\"f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7\" pid:4137 exited_at:{seconds:1748367993 nanos:452773297}" May 27 17:46:33.504705 kubelet[3458]: E0527 17:46:33.503903 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2sxmb" podUID="b0f763d7-ff63-4f3f-93a2-96ef656a46c5" May 27 17:46:33.538998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f72f989403e7d89851fdd0495a349bc597e8fa04551ce6a05db192af2f8e01e7-rootfs.mount: Deactivated successfully. 
May 27 17:46:33.649481 kubelet[3458]: I0527 17:46:33.647594 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:46:34.648777 containerd[2001]: time="2025-05-27T17:46:34.648131852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:46:35.503475 kubelet[3458]: E0527 17:46:35.502684 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2sxmb" podUID="b0f763d7-ff63-4f3f-93a2-96ef656a46c5" May 27 17:46:37.505340 kubelet[3458]: E0527 17:46:37.503462 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2sxmb" podUID="b0f763d7-ff63-4f3f-93a2-96ef656a46c5" May 27 17:46:37.762102 containerd[2001]: time="2025-05-27T17:46:37.761837166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:37.763769 containerd[2001]: time="2025-05-27T17:46:37.763705074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 17:46:37.766234 containerd[2001]: time="2025-05-27T17:46:37.766162178Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:37.769964 containerd[2001]: time="2025-05-27T17:46:37.769890888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 
17:46:37.770453 containerd[2001]: time="2025-05-27T17:46:37.770328008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.122065416s" May 27 17:46:37.770453 containerd[2001]: time="2025-05-27T17:46:37.770362106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 17:46:37.777453 containerd[2001]: time="2025-05-27T17:46:37.777380753Z" level=info msg="CreateContainer within sandbox \"ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:46:37.796458 containerd[2001]: time="2025-05-27T17:46:37.792797571Z" level=info msg="Container e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:37.828318 containerd[2001]: time="2025-05-27T17:46:37.827616564Z" level=info msg="CreateContainer within sandbox \"ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d\"" May 27 17:46:37.829900 containerd[2001]: time="2025-05-27T17:46:37.829604423Z" level=info msg="StartContainer for \"e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d\"" May 27 17:46:37.832567 containerd[2001]: time="2025-05-27T17:46:37.832528860Z" level=info msg="connecting to shim e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d" address="unix:///run/containerd/s/700d0273561ed625c22211e4482d163e8520dbdf8d4daa373bf09ccf2830952e" protocol=ttrpc version=3 May 27 17:46:37.882675 
systemd[1]: Started cri-containerd-e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d.scope - libcontainer container e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d. May 27 17:46:37.981360 containerd[2001]: time="2025-05-27T17:46:37.981316286Z" level=info msg="StartContainer for \"e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d\" returns successfully" May 27 17:46:38.812132 systemd[1]: cri-containerd-e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d.scope: Deactivated successfully. May 27 17:46:38.812969 systemd[1]: cri-containerd-e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d.scope: Consumed 625ms CPU time, 163.8M memory peak, 4.1M read from disk, 170.9M written to disk. May 27 17:46:38.932110 kubelet[3458]: I0527 17:46:38.932070 3458 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:46:38.935351 containerd[2001]: time="2025-05-27T17:46:38.935241375Z" level=info msg="received exit event container_id:\"e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d\" id:\"e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d\" pid:4193 exited_at:{seconds:1748367998 nanos:934922965}" May 27 17:46:38.936360 containerd[2001]: time="2025-05-27T17:46:38.936038421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d\" id:\"e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d\" pid:4193 exited_at:{seconds:1748367998 nanos:934922965}" May 27 17:46:38.986233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e66929fed7a24213a7108fbac46307159abc493f809e00b2ded902ad6578ec9d-rootfs.mount: Deactivated successfully. May 27 17:46:39.054974 systemd[1]: Created slice kubepods-burstable-podb0ef2a5f_84e9_4ac7_9d73_b40444c1d727.slice - libcontainer container kubepods-burstable-podb0ef2a5f_84e9_4ac7_9d73_b40444c1d727.slice. 
May 27 17:46:39.068934 systemd[1]: Created slice kubepods-besteffort-pod7de0100b_1870_4092_b782_64ed3af7bc43.slice - libcontainer container kubepods-besteffort-pod7de0100b_1870_4092_b782_64ed3af7bc43.slice. May 27 17:46:39.090856 systemd[1]: Created slice kubepods-besteffort-pod0f4dab51_bbfc_4262_8440_a1bb42ffd9e6.slice - libcontainer container kubepods-besteffort-pod0f4dab51_bbfc_4262_8440_a1bb42ffd9e6.slice. May 27 17:46:39.106947 systemd[1]: Created slice kubepods-burstable-pod65179e5b_6608_4d3e_a0c5_74ddb29ac692.slice - libcontainer container kubepods-burstable-pod65179e5b_6608_4d3e_a0c5_74ddb29ac692.slice. May 27 17:46:39.144211 systemd[1]: Created slice kubepods-besteffort-pod3b767e35_d8b4_4e3e_b073_420a388fcfe0.slice - libcontainer container kubepods-besteffort-pod3b767e35_d8b4_4e3e_b073_420a388fcfe0.slice. May 27 17:46:39.146822 kubelet[3458]: I0527 17:46:39.145129 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0f4dab51-bbfc-4262-8440-a1bb42ffd9e6-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-ph2ft\" (UID: \"0f4dab51-bbfc-4262-8440-a1bb42ffd9e6\") " pod="calico-system/goldmane-78d55f7ddc-ph2ft" May 27 17:46:39.146822 kubelet[3458]: I0527 17:46:39.145816 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psn8w\" (UniqueName: \"kubernetes.io/projected/7de0100b-1870-4092-b782-64ed3af7bc43-kube-api-access-psn8w\") pod \"whisker-6bd79577dc-74jdj\" (UID: \"7de0100b-1870-4092-b782-64ed3af7bc43\") " pod="calico-system/whisker-6bd79577dc-74jdj" May 27 17:46:39.146822 kubelet[3458]: I0527 17:46:39.145862 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65179e5b-6608-4d3e-a0c5-74ddb29ac692-config-volume\") pod \"coredns-674b8bbfcf-zdfdl\" (UID: \"65179e5b-6608-4d3e-a0c5-74ddb29ac692\") 
" pod="kube-system/coredns-674b8bbfcf-zdfdl" May 27 17:46:39.146822 kubelet[3458]: I0527 17:46:39.145900 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4dab51-bbfc-4262-8440-a1bb42ffd9e6-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-ph2ft\" (UID: \"0f4dab51-bbfc-4262-8440-a1bb42ffd9e6\") " pod="calico-system/goldmane-78d55f7ddc-ph2ft" May 27 17:46:39.146822 kubelet[3458]: I0527 17:46:39.145928 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b767e35-d8b4-4e3e-b073-420a388fcfe0-tigera-ca-bundle\") pod \"calico-kube-controllers-7bd9c5fbd7-ml6ks\" (UID: \"3b767e35-d8b4-4e3e-b073-420a388fcfe0\") " pod="calico-system/calico-kube-controllers-7bd9c5fbd7-ml6ks" May 27 17:46:39.148534 kubelet[3458]: I0527 17:46:39.145952 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwxk\" (UniqueName: \"kubernetes.io/projected/65179e5b-6608-4d3e-a0c5-74ddb29ac692-kube-api-access-pcwxk\") pod \"coredns-674b8bbfcf-zdfdl\" (UID: \"65179e5b-6608-4d3e-a0c5-74ddb29ac692\") " pod="kube-system/coredns-674b8bbfcf-zdfdl" May 27 17:46:39.148534 kubelet[3458]: I0527 17:46:39.145988 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgp8\" (UniqueName: \"kubernetes.io/projected/0f4dab51-bbfc-4262-8440-a1bb42ffd9e6-kube-api-access-bwgp8\") pod \"goldmane-78d55f7ddc-ph2ft\" (UID: \"0f4dab51-bbfc-4262-8440-a1bb42ffd9e6\") " pod="calico-system/goldmane-78d55f7ddc-ph2ft" May 27 17:46:39.148534 kubelet[3458]: I0527 17:46:39.146013 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql62x\" (UniqueName: 
\"kubernetes.io/projected/b0ef2a5f-84e9-4ac7-9d73-b40444c1d727-kube-api-access-ql62x\") pod \"coredns-674b8bbfcf-7cz9n\" (UID: \"b0ef2a5f-84e9-4ac7-9d73-b40444c1d727\") " pod="kube-system/coredns-674b8bbfcf-7cz9n" May 27 17:46:39.148534 kubelet[3458]: I0527 17:46:39.146046 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4dab51-bbfc-4262-8440-a1bb42ffd9e6-config\") pod \"goldmane-78d55f7ddc-ph2ft\" (UID: \"0f4dab51-bbfc-4262-8440-a1bb42ffd9e6\") " pod="calico-system/goldmane-78d55f7ddc-ph2ft" May 27 17:46:39.148534 kubelet[3458]: I0527 17:46:39.146071 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d58xg\" (UniqueName: \"kubernetes.io/projected/3b767e35-d8b4-4e3e-b073-420a388fcfe0-kube-api-access-d58xg\") pod \"calico-kube-controllers-7bd9c5fbd7-ml6ks\" (UID: \"3b767e35-d8b4-4e3e-b073-420a388fcfe0\") " pod="calico-system/calico-kube-controllers-7bd9c5fbd7-ml6ks" May 27 17:46:39.148754 kubelet[3458]: I0527 17:46:39.146099 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0ef2a5f-84e9-4ac7-9d73-b40444c1d727-config-volume\") pod \"coredns-674b8bbfcf-7cz9n\" (UID: \"b0ef2a5f-84e9-4ac7-9d73-b40444c1d727\") " pod="kube-system/coredns-674b8bbfcf-7cz9n" May 27 17:46:39.148754 kubelet[3458]: I0527 17:46:39.146124 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-backend-key-pair\") pod \"whisker-6bd79577dc-74jdj\" (UID: \"7de0100b-1870-4092-b782-64ed3af7bc43\") " pod="calico-system/whisker-6bd79577dc-74jdj" May 27 17:46:39.148754 kubelet[3458]: I0527 17:46:39.146148 3458 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-ca-bundle\") pod \"whisker-6bd79577dc-74jdj\" (UID: \"7de0100b-1870-4092-b782-64ed3af7bc43\") " pod="calico-system/whisker-6bd79577dc-74jdj" May 27 17:46:39.162717 systemd[1]: Created slice kubepods-besteffort-pod7f88ba61_a3fb_420e_a5cb_af0c342ffeab.slice - libcontainer container kubepods-besteffort-pod7f88ba61_a3fb_420e_a5cb_af0c342ffeab.slice. May 27 17:46:39.171604 systemd[1]: Created slice kubepods-besteffort-pod7d203b3c_bbe7_4091_9e00_dea858c46678.slice - libcontainer container kubepods-besteffort-pod7d203b3c_bbe7_4091_9e00_dea858c46678.slice. May 27 17:46:39.246726 kubelet[3458]: I0527 17:46:39.246684 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f88ba61-a3fb-420e-a5cb-af0c342ffeab-calico-apiserver-certs\") pod \"calico-apiserver-df9788ff4-5xfh7\" (UID: \"7f88ba61-a3fb-420e-a5cb-af0c342ffeab\") " pod="calico-apiserver/calico-apiserver-df9788ff4-5xfh7" May 27 17:46:39.246938 kubelet[3458]: I0527 17:46:39.246894 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d203b3c-bbe7-4091-9e00-dea858c46678-calico-apiserver-certs\") pod \"calico-apiserver-df9788ff4-fl2mg\" (UID: \"7d203b3c-bbe7-4091-9e00-dea858c46678\") " pod="calico-apiserver/calico-apiserver-df9788ff4-fl2mg" May 27 17:46:39.247078 kubelet[3458]: I0527 17:46:39.246959 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fls\" (UniqueName: \"kubernetes.io/projected/7f88ba61-a3fb-420e-a5cb-af0c342ffeab-kube-api-access-c7fls\") pod \"calico-apiserver-df9788ff4-5xfh7\" (UID: \"7f88ba61-a3fb-420e-a5cb-af0c342ffeab\") " 
pod="calico-apiserver/calico-apiserver-df9788ff4-5xfh7" May 27 17:46:39.247078 kubelet[3458]: I0527 17:46:39.246978 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckn7q\" (UniqueName: \"kubernetes.io/projected/7d203b3c-bbe7-4091-9e00-dea858c46678-kube-api-access-ckn7q\") pod \"calico-apiserver-df9788ff4-fl2mg\" (UID: \"7d203b3c-bbe7-4091-9e00-dea858c46678\") " pod="calico-apiserver/calico-apiserver-df9788ff4-fl2mg" May 27 17:46:39.379717 containerd[2001]: time="2025-05-27T17:46:39.379587731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7cz9n,Uid:b0ef2a5f-84e9-4ac7-9d73-b40444c1d727,Namespace:kube-system,Attempt:0,}" May 27 17:46:39.393939 containerd[2001]: time="2025-05-27T17:46:39.393893490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bd79577dc-74jdj,Uid:7de0100b-1870-4092-b782-64ed3af7bc43,Namespace:calico-system,Attempt:0,}" May 27 17:46:39.411121 containerd[2001]: time="2025-05-27T17:46:39.411075258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ph2ft,Uid:0f4dab51-bbfc-4262-8440-a1bb42ffd9e6,Namespace:calico-system,Attempt:0,}" May 27 17:46:39.436890 containerd[2001]: time="2025-05-27T17:46:39.436534660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zdfdl,Uid:65179e5b-6608-4d3e-a0c5-74ddb29ac692,Namespace:kube-system,Attempt:0,}" May 27 17:46:39.456268 containerd[2001]: time="2025-05-27T17:46:39.456222227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9c5fbd7-ml6ks,Uid:3b767e35-d8b4-4e3e-b073-420a388fcfe0,Namespace:calico-system,Attempt:0,}" May 27 17:46:39.475454 containerd[2001]: time="2025-05-27T17:46:39.475399202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-5xfh7,Uid:7f88ba61-a3fb-420e-a5cb-af0c342ffeab,Namespace:calico-apiserver,Attempt:0,}" May 27 17:46:39.482992 
containerd[2001]: time="2025-05-27T17:46:39.482942363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-fl2mg,Uid:7d203b3c-bbe7-4091-9e00-dea858c46678,Namespace:calico-apiserver,Attempt:0,}" May 27 17:46:39.512570 systemd[1]: Created slice kubepods-besteffort-podb0f763d7_ff63_4f3f_93a2_96ef656a46c5.slice - libcontainer container kubepods-besteffort-podb0f763d7_ff63_4f3f_93a2_96ef656a46c5.slice. May 27 17:46:39.517909 containerd[2001]: time="2025-05-27T17:46:39.517867939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2sxmb,Uid:b0f763d7-ff63-4f3f-93a2-96ef656a46c5,Namespace:calico-system,Attempt:0,}" May 27 17:46:39.731051 containerd[2001]: time="2025-05-27T17:46:39.730686172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:46:39.966791 containerd[2001]: time="2025-05-27T17:46:39.966627929Z" level=error msg="Failed to destroy network for sandbox \"80894bbe67115d3368b7f89475396429bf01960b8eaa79770d678872629d5375\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.020933 containerd[2001]: time="2025-05-27T17:46:39.971866818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-5xfh7,Uid:7f88ba61-a3fb-420e-a5cb-af0c342ffeab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80894bbe67115d3368b7f89475396429bf01960b8eaa79770d678872629d5375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.024348 kubelet[3458]: E0527 17:46:40.021673 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"80894bbe67115d3368b7f89475396429bf01960b8eaa79770d678872629d5375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.024348 kubelet[3458]: E0527 17:46:40.021751 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80894bbe67115d3368b7f89475396429bf01960b8eaa79770d678872629d5375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-df9788ff4-5xfh7" May 27 17:46:40.024348 kubelet[3458]: E0527 17:46:40.021781 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80894bbe67115d3368b7f89475396429bf01960b8eaa79770d678872629d5375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-df9788ff4-5xfh7" May 27 17:46:40.025028 kubelet[3458]: E0527 17:46:40.021844 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-df9788ff4-5xfh7_calico-apiserver(7f88ba61-a3fb-420e-a5cb-af0c342ffeab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-df9788ff4-5xfh7_calico-apiserver(7f88ba61-a3fb-420e-a5cb-af0c342ffeab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80894bbe67115d3368b7f89475396429bf01960b8eaa79770d678872629d5375\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-df9788ff4-5xfh7" podUID="7f88ba61-a3fb-420e-a5cb-af0c342ffeab" May 27 17:46:40.054035 containerd[2001]: time="2025-05-27T17:46:40.053252416Z" level=error msg="Failed to destroy network for sandbox \"70cc06ef00e3490560b110dcfac99de89bbd4e626184e70410b38dd98e655eae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.058312 systemd[1]: run-netns-cni\x2de66593ea\x2d272a\x2d7387\x2d9098\x2d8e1ede29d01c.mount: Deactivated successfully. May 27 17:46:40.066783 containerd[2001]: time="2025-05-27T17:46:40.066735326Z" level=error msg="Failed to destroy network for sandbox \"3b5ce841cedd60382c17ee6d65d8f59fcdfcd815ecd78f99486ce15c541f165b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.071089 systemd[1]: run-netns-cni\x2d4df5df28\x2d17d5\x2d8f3c\x2d6eb5\x2de2254c4a77b2.mount: Deactivated successfully. May 27 17:46:40.096064 containerd[2001]: time="2025-05-27T17:46:40.095910331Z" level=error msg="Failed to destroy network for sandbox \"e3300ca64b0363a8091cb2d7552e268398534c8d45687f79f1b499de56e434c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.101215 systemd[1]: run-netns-cni\x2d02423f87\x2da275\x2d8c68\x2da58d\x2deb2923880794.mount: Deactivated successfully. 
May 27 17:46:40.104993 containerd[2001]: time="2025-05-27T17:46:40.101197276Z" level=error msg="Failed to destroy network for sandbox \"d1735163a838104505fd74e003f961e8d0795cf6b16d389c41ad8d7ef8a3a2d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.106832 systemd[1]: run-netns-cni\x2ddb426229\x2d7c8c\x2d7501\x2da549\x2ddd1f3844c3bc.mount: Deactivated successfully. May 27 17:46:40.110554 containerd[2001]: time="2025-05-27T17:46:40.110493560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7cz9n,Uid:b0ef2a5f-84e9-4ac7-9d73-b40444c1d727,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70cc06ef00e3490560b110dcfac99de89bbd4e626184e70410b38dd98e655eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.112358 kubelet[3458]: E0527 17:46:40.112300 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70cc06ef00e3490560b110dcfac99de89bbd4e626184e70410b38dd98e655eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.112643 kubelet[3458]: E0527 17:46:40.112619 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70cc06ef00e3490560b110dcfac99de89bbd4e626184e70410b38dd98e655eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-7cz9n" May 27 17:46:40.112771 kubelet[3458]: E0527 17:46:40.112753 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70cc06ef00e3490560b110dcfac99de89bbd4e626184e70410b38dd98e655eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7cz9n" May 27 17:46:40.112882 containerd[2001]: time="2025-05-27T17:46:40.112836559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9c5fbd7-ml6ks,Uid:3b767e35-d8b4-4e3e-b073-420a388fcfe0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b5ce841cedd60382c17ee6d65d8f59fcdfcd815ecd78f99486ce15c541f165b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.113088 kubelet[3458]: E0527 17:46:40.113047 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7cz9n_kube-system(b0ef2a5f-84e9-4ac7-9d73-b40444c1d727)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7cz9n_kube-system(b0ef2a5f-84e9-4ac7-9d73-b40444c1d727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70cc06ef00e3490560b110dcfac99de89bbd4e626184e70410b38dd98e655eae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7cz9n" podUID="b0ef2a5f-84e9-4ac7-9d73-b40444c1d727" May 27 17:46:40.114948 kubelet[3458]: E0527 17:46:40.113672 3458 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b5ce841cedd60382c17ee6d65d8f59fcdfcd815ecd78f99486ce15c541f165b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.115342 kubelet[3458]: E0527 17:46:40.115088 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b5ce841cedd60382c17ee6d65d8f59fcdfcd815ecd78f99486ce15c541f165b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bd9c5fbd7-ml6ks" May 27 17:46:40.115342 kubelet[3458]: E0527 17:46:40.115286 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b5ce841cedd60382c17ee6d65d8f59fcdfcd815ecd78f99486ce15c541f165b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bd9c5fbd7-ml6ks" May 27 17:46:40.115565 containerd[2001]: time="2025-05-27T17:46:40.115384067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zdfdl,Uid:65179e5b-6608-4d3e-a0c5-74ddb29ac692,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3300ca64b0363a8091cb2d7552e268398534c8d45687f79f1b499de56e434c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.116012 
kubelet[3458]: E0527 17:46:40.115490 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bd9c5fbd7-ml6ks_calico-system(3b767e35-d8b4-4e3e-b073-420a388fcfe0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bd9c5fbd7-ml6ks_calico-system(3b767e35-d8b4-4e3e-b073-420a388fcfe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b5ce841cedd60382c17ee6d65d8f59fcdfcd815ecd78f99486ce15c541f165b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bd9c5fbd7-ml6ks" podUID="3b767e35-d8b4-4e3e-b073-420a388fcfe0" May 27 17:46:40.116125 containerd[2001]: time="2025-05-27T17:46:40.115858236Z" level=error msg="Failed to destroy network for sandbox \"0fcf2086bbd2aa5905aeab8e8619eebd86327e09b08a99fa48534578c90dcc00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.117542 kubelet[3458]: E0527 17:46:40.117099 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3300ca64b0363a8091cb2d7552e268398534c8d45687f79f1b499de56e434c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.117542 kubelet[3458]: E0527 17:46:40.117152 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3300ca64b0363a8091cb2d7552e268398534c8d45687f79f1b499de56e434c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zdfdl" May 27 17:46:40.117542 kubelet[3458]: E0527 17:46:40.117181 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3300ca64b0363a8091cb2d7552e268398534c8d45687f79f1b499de56e434c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zdfdl" May 27 17:46:40.117898 kubelet[3458]: E0527 17:46:40.117765 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zdfdl_kube-system(65179e5b-6608-4d3e-a0c5-74ddb29ac692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zdfdl_kube-system(65179e5b-6608-4d3e-a0c5-74ddb29ac692)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3300ca64b0363a8091cb2d7552e268398534c8d45687f79f1b499de56e434c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zdfdl" podUID="65179e5b-6608-4d3e-a0c5-74ddb29ac692" May 27 17:46:40.119906 containerd[2001]: time="2025-05-27T17:46:40.119853136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-fl2mg,Uid:7d203b3c-bbe7-4091-9e00-dea858c46678,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1735163a838104505fd74e003f961e8d0795cf6b16d389c41ad8d7ef8a3a2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" May 27 17:46:40.120662 kubelet[3458]: E0527 17:46:40.120574 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1735163a838104505fd74e003f961e8d0795cf6b16d389c41ad8d7ef8a3a2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.120828 kubelet[3458]: E0527 17:46:40.120775 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1735163a838104505fd74e003f961e8d0795cf6b16d389c41ad8d7ef8a3a2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-df9788ff4-fl2mg" May 27 17:46:40.120828 kubelet[3458]: E0527 17:46:40.120803 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1735163a838104505fd74e003f961e8d0795cf6b16d389c41ad8d7ef8a3a2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-df9788ff4-fl2mg" May 27 17:46:40.121093 kubelet[3458]: E0527 17:46:40.121061 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-df9788ff4-fl2mg_calico-apiserver(7d203b3c-bbe7-4091-9e00-dea858c46678)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-df9788ff4-fl2mg_calico-apiserver(7d203b3c-bbe7-4091-9e00-dea858c46678)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d1735163a838104505fd74e003f961e8d0795cf6b16d389c41ad8d7ef8a3a2d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-df9788ff4-fl2mg" podUID="7d203b3c-bbe7-4091-9e00-dea858c46678" May 27 17:46:40.121204 containerd[2001]: time="2025-05-27T17:46:40.121107293Z" level=error msg="Failed to destroy network for sandbox \"6b3cba2d39eeed05ec5872607669770b6b5151d9cfcc77ed13f116249c2702c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.122093 containerd[2001]: time="2025-05-27T17:46:40.121983839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2sxmb,Uid:b0f763d7-ff63-4f3f-93a2-96ef656a46c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fcf2086bbd2aa5905aeab8e8619eebd86327e09b08a99fa48534578c90dcc00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.122454 kubelet[3458]: E0527 17:46:40.122335 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fcf2086bbd2aa5905aeab8e8619eebd86327e09b08a99fa48534578c90dcc00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.122454 kubelet[3458]: E0527 17:46:40.122385 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0fcf2086bbd2aa5905aeab8e8619eebd86327e09b08a99fa48534578c90dcc00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2sxmb" May 27 17:46:40.122454 kubelet[3458]: E0527 17:46:40.122413 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fcf2086bbd2aa5905aeab8e8619eebd86327e09b08a99fa48534578c90dcc00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2sxmb" May 27 17:46:40.122637 kubelet[3458]: E0527 17:46:40.122531 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2sxmb_calico-system(b0f763d7-ff63-4f3f-93a2-96ef656a46c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2sxmb_calico-system(b0f763d7-ff63-4f3f-93a2-96ef656a46c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fcf2086bbd2aa5905aeab8e8619eebd86327e09b08a99fa48534578c90dcc00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2sxmb" podUID="b0f763d7-ff63-4f3f-93a2-96ef656a46c5" May 27 17:46:40.124589 containerd[2001]: time="2025-05-27T17:46:40.124323412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bd79577dc-74jdj,Uid:7de0100b-1870-4092-b782-64ed3af7bc43,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3cba2d39eeed05ec5872607669770b6b5151d9cfcc77ed13f116249c2702c0\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.125830 kubelet[3458]: E0527 17:46:40.125496 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3cba2d39eeed05ec5872607669770b6b5151d9cfcc77ed13f116249c2702c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.125830 kubelet[3458]: E0527 17:46:40.125547 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3cba2d39eeed05ec5872607669770b6b5151d9cfcc77ed13f116249c2702c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bd79577dc-74jdj" May 27 17:46:40.125830 kubelet[3458]: E0527 17:46:40.125569 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3cba2d39eeed05ec5872607669770b6b5151d9cfcc77ed13f116249c2702c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bd79577dc-74jdj" May 27 17:46:40.126028 kubelet[3458]: E0527 17:46:40.125627 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bd79577dc-74jdj_calico-system(7de0100b-1870-4092-b782-64ed3af7bc43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bd79577dc-74jdj_calico-system(7de0100b-1870-4092-b782-64ed3af7bc43)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"6b3cba2d39eeed05ec5872607669770b6b5151d9cfcc77ed13f116249c2702c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bd79577dc-74jdj" podUID="7de0100b-1870-4092-b782-64ed3af7bc43" May 27 17:46:40.127515 containerd[2001]: time="2025-05-27T17:46:40.127479078Z" level=error msg="Failed to destroy network for sandbox \"885d1f310260fa9ce85c60daa9af54a02f517790d878d379862f83c045110b42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.130187 containerd[2001]: time="2025-05-27T17:46:40.130133246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ph2ft,Uid:0f4dab51-bbfc-4262-8440-a1bb42ffd9e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"885d1f310260fa9ce85c60daa9af54a02f517790d878d379862f83c045110b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.130595 kubelet[3458]: E0527 17:46:40.130549 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"885d1f310260fa9ce85c60daa9af54a02f517790d878d379862f83c045110b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:40.130767 kubelet[3458]: E0527 17:46:40.130607 3458 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"885d1f310260fa9ce85c60daa9af54a02f517790d878d379862f83c045110b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-ph2ft" May 27 17:46:40.130767 kubelet[3458]: E0527 17:46:40.130630 3458 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"885d1f310260fa9ce85c60daa9af54a02f517790d878d379862f83c045110b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-ph2ft" May 27 17:46:40.130767 kubelet[3458]: E0527 17:46:40.130701 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-ph2ft_calico-system(0f4dab51-bbfc-4262-8440-a1bb42ffd9e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-ph2ft_calico-system(0f4dab51-bbfc-4262-8440-a1bb42ffd9e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"885d1f310260fa9ce85c60daa9af54a02f517790d878d379862f83c045110b42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6" May 27 17:46:40.984933 systemd[1]: run-netns-cni\x2d81cc6d32\x2d69f1\x2d45cd\x2d6e84\x2dd8dca2d2320d.mount: Deactivated successfully. May 27 17:46:40.985251 systemd[1]: run-netns-cni\x2d1fd55278\x2d44ed\x2d2008\x2dc3b3\x2d0d72fac107ce.mount: Deactivated successfully. May 27 17:46:40.985419 systemd[1]: run-netns-cni\x2d3a061b64\x2d0fb3\x2d1428\x2d50f8\x2d2054f0cdea28.mount: Deactivated successfully. 
May 27 17:46:46.138822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299468247.mount: Deactivated successfully. May 27 17:46:46.215883 containerd[2001]: time="2025-05-27T17:46:46.215835114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:46.219884 containerd[2001]: time="2025-05-27T17:46:46.219807043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 17:46:46.232240 containerd[2001]: time="2025-05-27T17:46:46.232179163Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:46.233679 containerd[2001]: time="2025-05-27T17:46:46.233033936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:46.234747 containerd[2001]: time="2025-05-27T17:46:46.234710299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.502824454s" May 27 17:46:46.234892 containerd[2001]: time="2025-05-27T17:46:46.234875567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 17:46:46.278727 containerd[2001]: time="2025-05-27T17:46:46.278681289Z" level=info msg="CreateContainer within sandbox \"ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 17:46:46.365645 containerd[2001]: time="2025-05-27T17:46:46.365539332Z" level=info msg="Container 5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:46.366227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665779340.mount: Deactivated successfully. May 27 17:46:46.468744 containerd[2001]: time="2025-05-27T17:46:46.468373117Z" level=info msg="CreateContainer within sandbox \"ac0e7b4218fc9e6304f98cf17e176eaa7c634a31031b0fd02d61241fa09b9080\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5\"" May 27 17:46:46.469976 containerd[2001]: time="2025-05-27T17:46:46.469753782Z" level=info msg="StartContainer for \"5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5\"" May 27 17:46:46.480161 containerd[2001]: time="2025-05-27T17:46:46.479411007Z" level=info msg="connecting to shim 5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5" address="unix:///run/containerd/s/700d0273561ed625c22211e4482d163e8520dbdf8d4daa373bf09ccf2830952e" protocol=ttrpc version=3 May 27 17:46:46.684852 systemd[1]: Started cri-containerd-5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5.scope - libcontainer container 5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5. May 27 17:46:46.779466 containerd[2001]: time="2025-05-27T17:46:46.779008917Z" level=info msg="StartContainer for \"5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5\" returns successfully" May 27 17:46:46.919610 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 17:46:46.920361 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 17:46:47.415457 kubelet[3458]: I0527 17:46:47.415178 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-ca-bundle\") pod \"7de0100b-1870-4092-b782-64ed3af7bc43\" (UID: \"7de0100b-1870-4092-b782-64ed3af7bc43\") " May 27 17:46:47.415457 kubelet[3458]: I0527 17:46:47.415235 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psn8w\" (UniqueName: \"kubernetes.io/projected/7de0100b-1870-4092-b782-64ed3af7bc43-kube-api-access-psn8w\") pod \"7de0100b-1870-4092-b782-64ed3af7bc43\" (UID: \"7de0100b-1870-4092-b782-64ed3af7bc43\") " May 27 17:46:47.415457 kubelet[3458]: I0527 17:46:47.415293 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-backend-key-pair\") pod \"7de0100b-1870-4092-b782-64ed3af7bc43\" (UID: \"7de0100b-1870-4092-b782-64ed3af7bc43\") " May 27 17:46:47.416701 kubelet[3458]: I0527 17:46:47.416663 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7de0100b-1870-4092-b782-64ed3af7bc43" (UID: "7de0100b-1870-4092-b782-64ed3af7bc43"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:46:47.444473 kubelet[3458]: I0527 17:46:47.442553 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de0100b-1870-4092-b782-64ed3af7bc43-kube-api-access-psn8w" (OuterVolumeSpecName: "kube-api-access-psn8w") pod "7de0100b-1870-4092-b782-64ed3af7bc43" (UID: "7de0100b-1870-4092-b782-64ed3af7bc43"). InnerVolumeSpecName "kube-api-access-psn8w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:46:47.444473 kubelet[3458]: I0527 17:46:47.443269 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7de0100b-1870-4092-b782-64ed3af7bc43" (UID: "7de0100b-1870-4092-b782-64ed3af7bc43"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:46:47.444830 systemd[1]: var-lib-kubelet-pods-7de0100b\x2d1870\x2d4092\x2db782\x2d64ed3af7bc43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsn8w.mount: Deactivated successfully. May 27 17:46:47.450729 systemd[1]: var-lib-kubelet-pods-7de0100b\x2d1870\x2d4092\x2db782\x2d64ed3af7bc43-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 17:46:47.515891 kubelet[3458]: I0527 17:46:47.515850 3458 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-backend-key-pair\") on node \"ip-172-31-23-101\" DevicePath \"\"" May 27 17:46:47.515891 kubelet[3458]: I0527 17:46:47.515889 3458 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7de0100b-1870-4092-b782-64ed3af7bc43-whisker-ca-bundle\") on node \"ip-172-31-23-101\" DevicePath \"\"" May 27 17:46:47.515891 kubelet[3458]: I0527 17:46:47.515901 3458 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-psn8w\" (UniqueName: \"kubernetes.io/projected/7de0100b-1870-4092-b782-64ed3af7bc43-kube-api-access-psn8w\") on node \"ip-172-31-23-101\" DevicePath \"\"" May 27 17:46:47.522504 systemd[1]: Removed slice kubepods-besteffort-pod7de0100b_1870_4092_b782_64ed3af7bc43.slice - libcontainer container 
kubepods-besteffort-pod7de0100b_1870_4092_b782_64ed3af7bc43.slice. May 27 17:46:47.783609 kubelet[3458]: I0527 17:46:47.778771 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sf7vs" podStartSLOduration=2.295071894 podStartE2EDuration="18.77874988s" podCreationTimestamp="2025-05-27 17:46:29 +0000 UTC" firstStartedPulling="2025-05-27 17:46:29.755219407 +0000 UTC m=+22.431586530" lastFinishedPulling="2025-05-27 17:46:46.238897395 +0000 UTC m=+38.915264516" observedRunningTime="2025-05-27 17:46:47.770332446 +0000 UTC m=+40.446699590" watchObservedRunningTime="2025-05-27 17:46:47.77874988 +0000 UTC m=+40.455117021" May 27 17:46:47.957515 systemd[1]: Created slice kubepods-besteffort-pod901818f8_0dca_4e36_a31e_1e343d7aa13a.slice - libcontainer container kubepods-besteffort-pod901818f8_0dca_4e36_a31e_1e343d7aa13a.slice. May 27 17:46:48.019664 kubelet[3458]: I0527 17:46:48.019563 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/901818f8-0dca-4e36-a31e-1e343d7aa13a-whisker-ca-bundle\") pod \"whisker-5dff597dcf-9vfrd\" (UID: \"901818f8-0dca-4e36-a31e-1e343d7aa13a\") " pod="calico-system/whisker-5dff597dcf-9vfrd" May 27 17:46:48.020291 kubelet[3458]: I0527 17:46:48.019679 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hd9\" (UniqueName: \"kubernetes.io/projected/901818f8-0dca-4e36-a31e-1e343d7aa13a-kube-api-access-t6hd9\") pod \"whisker-5dff597dcf-9vfrd\" (UID: \"901818f8-0dca-4e36-a31e-1e343d7aa13a\") " pod="calico-system/whisker-5dff597dcf-9vfrd" May 27 17:46:48.020291 kubelet[3458]: I0527 17:46:48.019750 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/901818f8-0dca-4e36-a31e-1e343d7aa13a-whisker-backend-key-pair\") pod 
\"whisker-5dff597dcf-9vfrd\" (UID: \"901818f8-0dca-4e36-a31e-1e343d7aa13a\") " pod="calico-system/whisker-5dff597dcf-9vfrd" May 27 17:46:48.117074 containerd[2001]: time="2025-05-27T17:46:48.117024886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5\" id:\"b2139d70c70dd568fdbe9388d3ad263a0bcfec2d1147c2110463b91c7e0b6f94\" pid:4535 exit_status:1 exited_at:{seconds:1748368008 nanos:116653183}" May 27 17:46:48.267858 containerd[2001]: time="2025-05-27T17:46:48.267532080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dff597dcf-9vfrd,Uid:901818f8-0dca-4e36-a31e-1e343d7aa13a,Namespace:calico-system,Attempt:0,}" May 27 17:46:49.028492 (udev-worker)[4496]: Network interface NamePolicy= disabled on kernel command line. May 27 17:46:49.034581 systemd-networkd[1817]: calieefe9eee4a6: Link UP May 27 17:46:49.037538 systemd-networkd[1817]: calieefe9eee4a6: Gained carrier May 27 17:46:49.075351 containerd[2001]: 2025-05-27 17:46:48.319 [INFO][4554] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:49.075351 containerd[2001]: 2025-05-27 17:46:48.404 [INFO][4554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0 whisker-5dff597dcf- calico-system 901818f8-0dca-4e36-a31e-1e343d7aa13a 885 0 2025-05-27 17:46:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dff597dcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-101 whisker-5dff597dcf-9vfrd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieefe9eee4a6 [] [] }} ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" 
WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-" May 27 17:46:49.075351 containerd[2001]: 2025-05-27 17:46:48.404 [INFO][4554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" May 27 17:46:49.075351 containerd[2001]: 2025-05-27 17:46:48.891 [INFO][4562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" HandleID="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Workload="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.896 [INFO][4562] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" HandleID="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Workload="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-101", "pod":"whisker-5dff597dcf-9vfrd", "timestamp":"2025-05-27 17:46:48.891829303 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.896 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.896 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.896 [INFO][4562] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.934 [INFO][4562] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" host="ip-172-31-23-101" May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.956 [INFO][4562] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.965 [INFO][4562] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.969 [INFO][4562] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:49.076032 containerd[2001]: 2025-05-27 17:46:48.973 [INFO][4562] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:49.076578 containerd[2001]: 2025-05-27 17:46:48.973 [INFO][4562] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" host="ip-172-31-23-101" May 27 17:46:49.076578 containerd[2001]: 2025-05-27 17:46:48.976 [INFO][4562] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6 May 27 17:46:49.076578 containerd[2001]: 2025-05-27 17:46:48.986 [INFO][4562] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" host="ip-172-31-23-101" May 27 17:46:49.076578 containerd[2001]: 2025-05-27 17:46:48.997 [INFO][4562] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.1/26] block=192.168.92.0/26 
handle="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" host="ip-172-31-23-101" May 27 17:46:49.076578 containerd[2001]: 2025-05-27 17:46:48.998 [INFO][4562] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.1/26] handle="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" host="ip-172-31-23-101" May 27 17:46:49.076578 containerd[2001]: 2025-05-27 17:46:48.998 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:46:49.076578 containerd[2001]: 2025-05-27 17:46:48.998 [INFO][4562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.1/26] IPv6=[] ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" HandleID="k8s-pod-network.ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Workload="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" May 27 17:46:49.076936 containerd[2001]: 2025-05-27 17:46:49.007 [INFO][4554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0", GenerateName:"whisker-5dff597dcf-", Namespace:"calico-system", SelfLink:"", UID:"901818f8-0dca-4e36-a31e-1e343d7aa13a", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dff597dcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"whisker-5dff597dcf-9vfrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieefe9eee4a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:49.076936 containerd[2001]: 2025-05-27 17:46:49.008 [INFO][4554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.1/32] ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" May 27 17:46:49.077112 containerd[2001]: 2025-05-27 17:46:49.008 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieefe9eee4a6 ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" May 27 17:46:49.077112 containerd[2001]: 2025-05-27 17:46:49.037 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" May 27 17:46:49.077204 containerd[2001]: 2025-05-27 17:46:49.037 [INFO][4554] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" 
Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0", GenerateName:"whisker-5dff597dcf-", Namespace:"calico-system", SelfLink:"", UID:"901818f8-0dca-4e36-a31e-1e343d7aa13a", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dff597dcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6", Pod:"whisker-5dff597dcf-9vfrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieefe9eee4a6", MAC:"e2:97:90:cd:e8:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:49.077335 containerd[2001]: 2025-05-27 17:46:49.061 [INFO][4554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" Namespace="calico-system" Pod="whisker-5dff597dcf-9vfrd" WorkloadEndpoint="ip--172--31--23--101-k8s-whisker--5dff597dcf--9vfrd-eth0" May 27 17:46:49.346017 
containerd[2001]: time="2025-05-27T17:46:49.345972704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5\" id:\"83daffc357b8408d7dc736746f3e22118097be1809083c1f5467bb3732608905\" pid:4666 exit_status:1 exited_at:{seconds:1748368009 nanos:345679531}" May 27 17:46:49.379828 containerd[2001]: time="2025-05-27T17:46:49.379732804Z" level=info msg="connecting to shim ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6" address="unix:///run/containerd/s/33498851cbd7af28be3ffb2ff1febbf597e7968eeae959819c7166039bc139fc" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:49.418740 systemd[1]: Started cri-containerd-ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6.scope - libcontainer container ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6. May 27 17:46:49.499195 containerd[2001]: time="2025-05-27T17:46:49.499124030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dff597dcf-9vfrd,Uid:901818f8-0dca-4e36-a31e-1e343d7aa13a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef6460647ca371f9d0116f2853c091c17097ba2f1ae2ddb953dcc8cbe598d8b6\"" May 27 17:46:49.502863 containerd[2001]: time="2025-05-27T17:46:49.502811131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:46:49.505515 kubelet[3458]: I0527 17:46:49.505412 3458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de0100b-1870-4092-b782-64ed3af7bc43" path="/var/lib/kubelet/pods/7de0100b-1870-4092-b782-64ed3af7bc43/volumes" May 27 17:46:49.737602 containerd[2001]: time="2025-05-27T17:46:49.737380667Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:46:49.740209 containerd[2001]: 
time="2025-05-27T17:46:49.740052343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:46:49.740592 containerd[2001]: time="2025-05-27T17:46:49.740093867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:46:49.740636 kubelet[3458]: E0527 17:46:49.740524 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:46:49.740636 kubelet[3458]: E0527 17:46:49.740591 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:46:49.757267 kubelet[3458]: E0527 17:46:49.757200 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c527f9838fbc47129edb353dd886be2b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:46:49.760069 containerd[2001]: 
time="2025-05-27T17:46:49.760032441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:46:49.972001 containerd[2001]: time="2025-05-27T17:46:49.971823749Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:46:49.974393 containerd[2001]: time="2025-05-27T17:46:49.974260258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:46:49.974393 containerd[2001]: time="2025-05-27T17:46:49.974367679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:46:49.974686 kubelet[3458]: E0527 17:46:49.974575 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:46:49.974686 kubelet[3458]: E0527 17:46:49.974621 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:46:49.974873 kubelet[3458]: E0527 17:46:49.974729 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:46:49.976872 kubelet[3458]: E0527 17:46:49.976679 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a" May 27 17:46:50.195694 systemd-networkd[1817]: calieefe9eee4a6: Gained IPv6LL 
May 27 17:46:50.503024 containerd[2001]: time="2025-05-27T17:46:50.502893582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2sxmb,Uid:b0f763d7-ff63-4f3f-93a2-96ef656a46c5,Namespace:calico-system,Attempt:0,}" May 27 17:46:50.637710 systemd-networkd[1817]: calib5a12721545: Link UP May 27 17:46:50.639040 systemd-networkd[1817]: calib5a12721545: Gained carrier May 27 17:46:50.660246 containerd[2001]: 2025-05-27 17:46:50.531 [INFO][4765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:50.660246 containerd[2001]: 2025-05-27 17:46:50.545 [INFO][4765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0 csi-node-driver- calico-system b0f763d7-ff63-4f3f-93a2-96ef656a46c5 708 0 2025-05-27 17:46:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-101 csi-node-driver-2sxmb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib5a12721545 [] [] }} ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-" May 27 17:46:50.660246 containerd[2001]: 2025-05-27 17:46:50.545 [INFO][4765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" May 27 17:46:50.660246 containerd[2001]: 2025-05-27 17:46:50.585 [INFO][4777] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" HandleID="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Workload="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.585 [INFO][4777] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" HandleID="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Workload="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9030), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-101", "pod":"csi-node-driver-2sxmb", "timestamp":"2025-05-27 17:46:50.58573786 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.586 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.586 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.586 [INFO][4777] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.594 [INFO][4777] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" host="ip-172-31-23-101" May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.600 [INFO][4777] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.606 [INFO][4777] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.608 [INFO][4777] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.613 [INFO][4777] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:50.660603 containerd[2001]: 2025-05-27 17:46:50.613 [INFO][4777] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" host="ip-172-31-23-101" May 27 17:46:50.661622 containerd[2001]: 2025-05-27 17:46:50.615 [INFO][4777] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1 May 27 17:46:50.661622 containerd[2001]: 2025-05-27 17:46:50.620 [INFO][4777] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" host="ip-172-31-23-101" May 27 17:46:50.661622 containerd[2001]: 2025-05-27 17:46:50.629 [INFO][4777] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.2/26] block=192.168.92.0/26 
handle="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" host="ip-172-31-23-101" May 27 17:46:50.661622 containerd[2001]: 2025-05-27 17:46:50.629 [INFO][4777] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.2/26] handle="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" host="ip-172-31-23-101" May 27 17:46:50.661622 containerd[2001]: 2025-05-27 17:46:50.629 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:46:50.661622 containerd[2001]: 2025-05-27 17:46:50.629 [INFO][4777] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.2/26] IPv6=[] ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" HandleID="k8s-pod-network.006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Workload="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" May 27 17:46:50.661847 containerd[2001]: 2025-05-27 17:46:50.633 [INFO][4765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0f763d7-ff63-4f3f-93a2-96ef656a46c5", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"csi-node-driver-2sxmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a12721545", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:50.661959 containerd[2001]: 2025-05-27 17:46:50.633 [INFO][4765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.2/32] ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" May 27 17:46:50.661959 containerd[2001]: 2025-05-27 17:46:50.633 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5a12721545 ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" May 27 17:46:50.661959 containerd[2001]: 2025-05-27 17:46:50.638 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" May 27 17:46:50.662272 containerd[2001]: 2025-05-27 17:46:50.640 [INFO][4765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0f763d7-ff63-4f3f-93a2-96ef656a46c5", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1", Pod:"csi-node-driver-2sxmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a12721545", MAC:"2a:07:ce:de:35:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:50.662540 containerd[2001]: 2025-05-27 17:46:50.654 [INFO][4765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" Namespace="calico-system" Pod="csi-node-driver-2sxmb" WorkloadEndpoint="ip--172--31--23--101-k8s-csi--node--driver--2sxmb-eth0" May 27 17:46:50.707411 containerd[2001]: time="2025-05-27T17:46:50.707354811Z" level=info msg="connecting to shim 006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1" address="unix:///run/containerd/s/977794b4ec27e7038ec0302d669fef1add737d5d503c9a9d993f62f476117112" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:50.744733 systemd[1]: Started cri-containerd-006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1.scope - libcontainer container 006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1. May 27 17:46:50.763552 kubelet[3458]: E0527 17:46:50.763065 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" 
podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a" May 27 17:46:50.801346 containerd[2001]: time="2025-05-27T17:46:50.801307503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2sxmb,Uid:b0f763d7-ff63-4f3f-93a2-96ef656a46c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1\"" May 27 17:46:50.802890 containerd[2001]: time="2025-05-27T17:46:50.802864016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 17:46:51.506023 containerd[2001]: time="2025-05-27T17:46:51.505724684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7cz9n,Uid:b0ef2a5f-84e9-4ac7-9d73-b40444c1d727,Namespace:kube-system,Attempt:0,}" May 27 17:46:51.507001 containerd[2001]: time="2025-05-27T17:46:51.506965123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-fl2mg,Uid:7d203b3c-bbe7-4091-9e00-dea858c46678,Namespace:calico-apiserver,Attempt:0,}" May 27 17:46:51.754363 systemd-networkd[1817]: cali6841083c6b2: Link UP May 27 17:46:51.758095 systemd-networkd[1817]: cali6841083c6b2: Gained carrier May 27 17:46:51.775400 containerd[2001]: 2025-05-27 17:46:51.591 [INFO][4864] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:51.775400 containerd[2001]: 2025-05-27 17:46:51.623 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0 coredns-674b8bbfcf- kube-system b0ef2a5f-84e9-4ac7-9d73-b40444c1d727 811 0 2025-05-27 17:46:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-101 coredns-674b8bbfcf-7cz9n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6841083c6b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics 
TCP 9153 0 }] [] }} ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-" May 27 17:46:51.775400 containerd[2001]: 2025-05-27 17:46:51.623 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" May 27 17:46:51.775400 containerd[2001]: 2025-05-27 17:46:51.680 [INFO][4882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" HandleID="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Workload="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.681 [INFO][4882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" HandleID="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Workload="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-101", "pod":"coredns-674b8bbfcf-7cz9n", "timestamp":"2025-05-27 17:46:51.680831956 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.681 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.681 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.681 [INFO][4882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.692 [INFO][4882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" host="ip-172-31-23-101" May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.698 [INFO][4882] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.706 [INFO][4882] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.709 [INFO][4882] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:51.775755 containerd[2001]: 2025-05-27 17:46:51.713 [INFO][4882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:51.776197 containerd[2001]: 2025-05-27 17:46:51.714 [INFO][4882] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" host="ip-172-31-23-101" May 27 17:46:51.776197 containerd[2001]: 2025-05-27 17:46:51.716 [INFO][4882] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52 May 27 17:46:51.776197 containerd[2001]: 2025-05-27 17:46:51.721 [INFO][4882] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" host="ip-172-31-23-101" May 27 17:46:51.776197 containerd[2001]: 
2025-05-27 17:46:51.731 [INFO][4882] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.3/26] block=192.168.92.0/26 handle="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" host="ip-172-31-23-101" May 27 17:46:51.776197 containerd[2001]: 2025-05-27 17:46:51.732 [INFO][4882] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.3/26] handle="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" host="ip-172-31-23-101" May 27 17:46:51.776197 containerd[2001]: 2025-05-27 17:46:51.732 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:46:51.776197 containerd[2001]: 2025-05-27 17:46:51.732 [INFO][4882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.3/26] IPv6=[] ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" HandleID="k8s-pod-network.7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Workload="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" May 27 17:46:51.776623 containerd[2001]: 2025-05-27 17:46:51.737 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b0ef2a5f-84e9-4ac7-9d73-b40444c1d727", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"coredns-674b8bbfcf-7cz9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6841083c6b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:51.776623 containerd[2001]: 2025-05-27 17:46:51.737 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.3/32] ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" May 27 17:46:51.776623 containerd[2001]: 2025-05-27 17:46:51.737 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6841083c6b2 ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" May 27 17:46:51.776623 containerd[2001]: 2025-05-27 17:46:51.758 [INFO][4864] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" May 27 17:46:51.776623 containerd[2001]: 2025-05-27 17:46:51.758 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b0ef2a5f-84e9-4ac7-9d73-b40444c1d727", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52", Pod:"coredns-674b8bbfcf-7cz9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6841083c6b2", MAC:"d2:65:7a:9f:93:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:51.776623 containerd[2001]: 2025-05-27 17:46:51.772 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" Namespace="kube-system" Pod="coredns-674b8bbfcf-7cz9n" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--7cz9n-eth0" May 27 17:46:51.821468 containerd[2001]: time="2025-05-27T17:46:51.821163720Z" level=info msg="connecting to shim 7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52" address="unix:///run/containerd/s/b0d412f74275874c2a560cff2e7c54896769dfb7572ab6ef22548d4b2bb82c65" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:51.864126 systemd[1]: Started cri-containerd-7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52.scope - libcontainer container 7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52. 
May 27 17:46:51.874553 systemd-networkd[1817]: cali4f79bac531f: Link UP May 27 17:46:51.878865 systemd-networkd[1817]: cali4f79bac531f: Gained carrier May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.622 [INFO][4857] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.649 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0 calico-apiserver-df9788ff4- calico-apiserver 7d203b3c-bbe7-4091-9e00-dea858c46678 819 0 2025-05-27 17:46:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:df9788ff4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-101 calico-apiserver-df9788ff4-fl2mg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f79bac531f [] [] }} ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.649 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.696 [INFO][4888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" 
HandleID="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Workload="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.697 [INFO][4888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" HandleID="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Workload="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d96f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-101", "pod":"calico-apiserver-df9788ff4-fl2mg", "timestamp":"2025-05-27 17:46:51.696903095 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.697 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.732 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.732 [INFO][4888] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.793 [INFO][4888] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.805 [INFO][4888] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.815 [INFO][4888] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.819 [INFO][4888] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.826 [INFO][4888] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.826 [INFO][4888] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.831 [INFO][4888] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417 May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.847 [INFO][4888] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.861 [INFO][4888] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.4/26] block=192.168.92.0/26 
handle="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.861 [INFO][4888] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.4/26] handle="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" host="ip-172-31-23-101" May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.861 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:46:51.918320 containerd[2001]: 2025-05-27 17:46:51.861 [INFO][4888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.4/26] IPv6=[] ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" HandleID="k8s-pod-network.f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Workload="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" May 27 17:46:51.919886 containerd[2001]: 2025-05-27 17:46:51.867 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0", GenerateName:"calico-apiserver-df9788ff4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d203b3c-bbe7-4091-9e00-dea858c46678", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9788ff4", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"calico-apiserver-df9788ff4-fl2mg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f79bac531f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:51.919886 containerd[2001]: 2025-05-27 17:46:51.868 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.4/32] ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" May 27 17:46:51.919886 containerd[2001]: 2025-05-27 17:46:51.868 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f79bac531f ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" May 27 17:46:51.919886 containerd[2001]: 2025-05-27 17:46:51.884 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" May 27 17:46:51.919886 containerd[2001]: 2025-05-27 17:46:51.887 [INFO][4857] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0", GenerateName:"calico-apiserver-df9788ff4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d203b3c-bbe7-4091-9e00-dea858c46678", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9788ff4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417", Pod:"calico-apiserver-df9788ff4-fl2mg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f79bac531f", MAC:"9a:bb:ed:11:09:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:51.919886 containerd[2001]: 2025-05-27 17:46:51.912 [INFO][4857] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-fl2mg" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--fl2mg-eth0" May 27 17:46:51.965302 containerd[2001]: time="2025-05-27T17:46:51.965126102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7cz9n,Uid:b0ef2a5f-84e9-4ac7-9d73-b40444c1d727,Namespace:kube-system,Attempt:0,} returns sandbox id \"7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52\"" May 27 17:46:51.971523 containerd[2001]: time="2025-05-27T17:46:51.971249987Z" level=info msg="connecting to shim f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417" address="unix:///run/containerd/s/f893afdaef8c5aab1209ee71f5c7ae6a334d7544be04191338339bc77b16f56a" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:51.976096 containerd[2001]: time="2025-05-27T17:46:51.976056780Z" level=info msg="CreateContainer within sandbox \"7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:46:52.016701 systemd[1]: Started cri-containerd-f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417.scope - libcontainer container f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417. 
May 27 17:46:52.029857 containerd[2001]: time="2025-05-27T17:46:52.029805381Z" level=info msg="Container 02a6783be8171a0d4729c384187aec14cebd7406df3e564799c1475194f47d89: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:52.053578 containerd[2001]: time="2025-05-27T17:46:52.053412716Z" level=info msg="CreateContainer within sandbox \"7364b33978fc98460e639de9e6111a7e2915139d95f650ca89b612c449227f52\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02a6783be8171a0d4729c384187aec14cebd7406df3e564799c1475194f47d89\"" May 27 17:46:52.054361 containerd[2001]: time="2025-05-27T17:46:52.054192248Z" level=info msg="StartContainer for \"02a6783be8171a0d4729c384187aec14cebd7406df3e564799c1475194f47d89\"" May 27 17:46:52.056496 containerd[2001]: time="2025-05-27T17:46:52.056159430Z" level=info msg="connecting to shim 02a6783be8171a0d4729c384187aec14cebd7406df3e564799c1475194f47d89" address="unix:///run/containerd/s/b0d412f74275874c2a560cff2e7c54896769dfb7572ab6ef22548d4b2bb82c65" protocol=ttrpc version=3 May 27 17:46:52.086783 systemd[1]: Started cri-containerd-02a6783be8171a0d4729c384187aec14cebd7406df3e564799c1475194f47d89.scope - libcontainer container 02a6783be8171a0d4729c384187aec14cebd7406df3e564799c1475194f47d89. 
May 27 17:46:52.124748 containerd[2001]: time="2025-05-27T17:46:52.124707821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-fl2mg,Uid:7d203b3c-bbe7-4091-9e00-dea858c46678,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417\"" May 27 17:46:52.167713 containerd[2001]: time="2025-05-27T17:46:52.167665303Z" level=info msg="StartContainer for \"02a6783be8171a0d4729c384187aec14cebd7406df3e564799c1475194f47d89\" returns successfully" May 27 17:46:52.370639 systemd-networkd[1817]: calib5a12721545: Gained IPv6LL May 27 17:46:52.456014 containerd[2001]: time="2025-05-27T17:46:52.455950708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:52.457184 containerd[2001]: time="2025-05-27T17:46:52.457129646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 17:46:52.460142 containerd[2001]: time="2025-05-27T17:46:52.460069654Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:52.465049 containerd[2001]: time="2025-05-27T17:46:52.464858823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:52.467176 containerd[2001]: time="2025-05-27T17:46:52.466976151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 
1.663921794s" May 27 17:46:52.467555 containerd[2001]: time="2025-05-27T17:46:52.467155320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 17:46:52.471105 containerd[2001]: time="2025-05-27T17:46:52.471037343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:46:52.479637 containerd[2001]: time="2025-05-27T17:46:52.479565559Z" level=info msg="CreateContainer within sandbox \"006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 17:46:52.502478 containerd[2001]: time="2025-05-27T17:46:52.501599899Z" level=info msg="Container 4370f5457be1f928b430bc818e30a735777bf18ff38a970295ea79b4b8206f92: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:52.506837 containerd[2001]: time="2025-05-27T17:46:52.506795655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ph2ft,Uid:0f4dab51-bbfc-4262-8440-a1bb42ffd9e6,Namespace:calico-system,Attempt:0,}" May 27 17:46:52.558005 containerd[2001]: time="2025-05-27T17:46:52.557805336Z" level=info msg="CreateContainer within sandbox \"006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4370f5457be1f928b430bc818e30a735777bf18ff38a970295ea79b4b8206f92\"" May 27 17:46:52.561211 containerd[2001]: time="2025-05-27T17:46:52.561062899Z" level=info msg="StartContainer for \"4370f5457be1f928b430bc818e30a735777bf18ff38a970295ea79b4b8206f92\"" May 27 17:46:52.570202 containerd[2001]: time="2025-05-27T17:46:52.570068620Z" level=info msg="connecting to shim 4370f5457be1f928b430bc818e30a735777bf18ff38a970295ea79b4b8206f92" address="unix:///run/containerd/s/977794b4ec27e7038ec0302d669fef1add737d5d503c9a9d993f62f476117112" protocol=ttrpc version=3 May 27 17:46:52.643889 systemd[1]: Started 
cri-containerd-4370f5457be1f928b430bc818e30a735777bf18ff38a970295ea79b4b8206f92.scope - libcontainer container 4370f5457be1f928b430bc818e30a735777bf18ff38a970295ea79b4b8206f92. May 27 17:46:52.818664 systemd-networkd[1817]: cali6841083c6b2: Gained IPv6LL May 27 17:46:52.894966 systemd-networkd[1817]: calidad44780271: Link UP May 27 17:46:52.898336 systemd-networkd[1817]: calidad44780271: Gained carrier May 27 17:46:52.926627 kubelet[3458]: I0527 17:46:52.925260 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7cz9n" podStartSLOduration=38.925038654 podStartE2EDuration="38.925038654s" podCreationTimestamp="2025-05-27 17:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:52.849026492 +0000 UTC m=+45.525393634" watchObservedRunningTime="2025-05-27 17:46:52.925038654 +0000 UTC m=+45.601405795" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.634 [INFO][5037] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.666 [INFO][5037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0 goldmane-78d55f7ddc- calico-system 0f4dab51-bbfc-4262-8440-a1bb42ffd9e6 815 0 2025-05-27 17:46:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-101 goldmane-78d55f7ddc-ph2ft eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidad44780271 [] [] }} ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" 
WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.667 [INFO][5037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.750 [INFO][5074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" HandleID="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Workload="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.750 [INFO][5074] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" HandleID="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Workload="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-101", "pod":"goldmane-78d55f7ddc-ph2ft", "timestamp":"2025-05-27 17:46:52.750192577 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.751 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.751 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.752 [INFO][5074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.765 [INFO][5074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.777 [INFO][5074] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.803 [INFO][5074] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.808 [INFO][5074] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.822 [INFO][5074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.823 [INFO][5074] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.831 [INFO][5074] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.855 [INFO][5074] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.870 [INFO][5074] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.5/26] block=192.168.92.0/26 
handle="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.870 [INFO][5074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.5/26] handle="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" host="ip-172-31-23-101" May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.870 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:46:52.933658 containerd[2001]: 2025-05-27 17:46:52.870 [INFO][5074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.5/26] IPv6=[] ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" HandleID="k8s-pod-network.e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Workload="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" May 27 17:46:52.940369 containerd[2001]: 2025-05-27 17:46:52.879 [INFO][5037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"0f4dab51-bbfc-4262-8440-a1bb42ffd9e6", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"goldmane-78d55f7ddc-ph2ft", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidad44780271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:52.940369 containerd[2001]: 2025-05-27 17:46:52.881 [INFO][5037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.5/32] ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" May 27 17:46:52.940369 containerd[2001]: 2025-05-27 17:46:52.881 [INFO][5037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidad44780271 ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" May 27 17:46:52.940369 containerd[2001]: 2025-05-27 17:46:52.898 [INFO][5037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" May 27 17:46:52.940369 containerd[2001]: 2025-05-27 17:46:52.898 [INFO][5037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"0f4dab51-bbfc-4262-8440-a1bb42ffd9e6", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f", Pod:"goldmane-78d55f7ddc-ph2ft", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidad44780271", MAC:"3a:81:d7:af:86:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:52.940369 containerd[2001]: 2025-05-27 17:46:52.924 [INFO][5037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ph2ft" 
WorkloadEndpoint="ip--172--31--23--101-k8s-goldmane--78d55f7ddc--ph2ft-eth0" May 27 17:46:52.975494 containerd[2001]: time="2025-05-27T17:46:52.975436613Z" level=info msg="StartContainer for \"4370f5457be1f928b430bc818e30a735777bf18ff38a970295ea79b4b8206f92\" returns successfully" May 27 17:46:53.001181 containerd[2001]: time="2025-05-27T17:46:53.000786645Z" level=info msg="connecting to shim e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f" address="unix:///run/containerd/s/8ba2b214389c8d9555f4aa852d19b44c9755b9e9860cbcaa3974150b57785ffd" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:53.013539 systemd-networkd[1817]: cali4f79bac531f: Gained IPv6LL May 27 17:46:53.075874 systemd[1]: Started cri-containerd-e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f.scope - libcontainer container e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f. May 27 17:46:53.166023 containerd[2001]: time="2025-05-27T17:46:53.165846357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ph2ft,Uid:0f4dab51-bbfc-4262-8440-a1bb42ffd9e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"e654a834ffd3101fee6838b8031495c5e73450bdb53a174c881daaed9db7337f\"" May 27 17:46:53.503024 containerd[2001]: time="2025-05-27T17:46:53.502820908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9c5fbd7-ml6ks,Uid:3b767e35-d8b4-4e3e-b073-420a388fcfe0,Namespace:calico-system,Attempt:0,}" May 27 17:46:53.633526 systemd-networkd[1817]: caliefe713cb38f: Link UP May 27 17:46:53.634706 systemd-networkd[1817]: caliefe713cb38f: Gained carrier May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.537 [INFO][5159] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.549 [INFO][5159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0 calico-kube-controllers-7bd9c5fbd7- calico-system 3b767e35-d8b4-4e3e-b073-420a388fcfe0 820 0 2025-05-27 17:46:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bd9c5fbd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-101 calico-kube-controllers-7bd9c5fbd7-ml6ks eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliefe713cb38f [] [] }} ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.550 [INFO][5159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.582 [INFO][5172] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" HandleID="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Workload="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.582 [INFO][5172] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" HandleID="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" 
Workload="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235310), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-101", "pod":"calico-kube-controllers-7bd9c5fbd7-ml6ks", "timestamp":"2025-05-27 17:46:53.582679976 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.582 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.583 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.583 [INFO][5172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.591 [INFO][5172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.597 [INFO][5172] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.604 [INFO][5172] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.607 [INFO][5172] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.610 [INFO][5172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.610 
[INFO][5172] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.611 [INFO][5172] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949 May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.620 [INFO][5172] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.628 [INFO][5172] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.6/26] block=192.168.92.0/26 handle="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.628 [INFO][5172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.6/26] handle="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" host="ip-172-31-23-101" May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.628 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:46:53.650034 containerd[2001]: 2025-05-27 17:46:53.628 [INFO][5172] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.6/26] IPv6=[] ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" HandleID="k8s-pod-network.8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Workload="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" May 27 17:46:53.650976 containerd[2001]: 2025-05-27 17:46:53.631 [INFO][5159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0", GenerateName:"calico-kube-controllers-7bd9c5fbd7-", Namespace:"calico-system", SelfLink:"", UID:"3b767e35-d8b4-4e3e-b073-420a388fcfe0", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd9c5fbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"calico-kube-controllers-7bd9c5fbd7-ml6ks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefe713cb38f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:53.650976 containerd[2001]: 2025-05-27 17:46:53.631 [INFO][5159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.6/32] ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" May 27 17:46:53.650976 containerd[2001]: 2025-05-27 17:46:53.631 [INFO][5159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefe713cb38f ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" May 27 17:46:53.650976 containerd[2001]: 2025-05-27 17:46:53.634 [INFO][5159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" May 27 17:46:53.650976 containerd[2001]: 2025-05-27 17:46:53.634 [INFO][5159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0", GenerateName:"calico-kube-controllers-7bd9c5fbd7-", Namespace:"calico-system", SelfLink:"", UID:"3b767e35-d8b4-4e3e-b073-420a388fcfe0", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd9c5fbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949", Pod:"calico-kube-controllers-7bd9c5fbd7-ml6ks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefe713cb38f", MAC:"ee:93:4f:09:84:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:53.650976 containerd[2001]: 2025-05-27 17:46:53.646 [INFO][5159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" Namespace="calico-system" Pod="calico-kube-controllers-7bd9c5fbd7-ml6ks" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--kube--controllers--7bd9c5fbd7--ml6ks-eth0" May 
27 17:46:53.686099 containerd[2001]: time="2025-05-27T17:46:53.685583116Z" level=info msg="connecting to shim 8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949" address="unix:///run/containerd/s/9058b63d6fad359c049189463f70f4d2d1670578f6d540b59a9cc34e9a3cb529" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:53.716687 systemd[1]: Started cri-containerd-8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949.scope - libcontainer container 8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949. May 27 17:46:53.779827 containerd[2001]: time="2025-05-27T17:46:53.779731200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd9c5fbd7-ml6ks,Uid:3b767e35-d8b4-4e3e-b073-420a388fcfe0,Namespace:calico-system,Attempt:0,} returns sandbox id \"8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949\"" May 27 17:46:54.502882 containerd[2001]: time="2025-05-27T17:46:54.502837383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-5xfh7,Uid:7f88ba61-a3fb-420e-a5cb-af0c342ffeab,Namespace:calico-apiserver,Attempt:0,}" May 27 17:46:54.715119 systemd-networkd[1817]: calie45934163a1: Link UP May 27 17:46:54.715733 systemd-networkd[1817]: calie45934163a1: Gained carrier May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.554 [INFO][5252] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.572 [INFO][5252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0 calico-apiserver-df9788ff4- calico-apiserver 7f88ba61-a3fb-420e-a5cb-af0c342ffeab 818 0 2025-05-27 17:46:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:df9788ff4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-101 calico-apiserver-df9788ff4-5xfh7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie45934163a1 [] [] }} ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.573 [INFO][5252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.623 [INFO][5269] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" HandleID="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Workload="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.624 [INFO][5269] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" HandleID="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Workload="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-101", "pod":"calico-apiserver-df9788ff4-5xfh7", "timestamp":"2025-05-27 17:46:54.623484242 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.624 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.624 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.624 [INFO][5269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.636 [INFO][5269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.647 [INFO][5269] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.657 [INFO][5269] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.662 [INFO][5269] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.666 [INFO][5269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.667 [INFO][5269] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.672 [INFO][5269] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7 May 27 17:46:54.743885 containerd[2001]: 
2025-05-27 17:46:54.687 [INFO][5269] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.703 [INFO][5269] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.7/26] block=192.168.92.0/26 handle="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.703 [INFO][5269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.7/26] handle="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" host="ip-172-31-23-101" May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.703 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:46:54.743885 containerd[2001]: 2025-05-27 17:46:54.703 [INFO][5269] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.7/26] IPv6=[] ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" HandleID="k8s-pod-network.b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Workload="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" May 27 17:46:54.746388 containerd[2001]: 2025-05-27 17:46:54.710 [INFO][5252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0", GenerateName:"calico-apiserver-df9788ff4-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"7f88ba61-a3fb-420e-a5cb-af0c342ffeab", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9788ff4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"calico-apiserver-df9788ff4-5xfh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie45934163a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:54.746388 containerd[2001]: 2025-05-27 17:46:54.710 [INFO][5252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.7/32] ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" May 27 17:46:54.746388 containerd[2001]: 2025-05-27 17:46:54.710 [INFO][5252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie45934163a1 ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" May 27 17:46:54.746388 
containerd[2001]: 2025-05-27 17:46:54.714 [INFO][5252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" May 27 17:46:54.746388 containerd[2001]: 2025-05-27 17:46:54.714 [INFO][5252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0", GenerateName:"calico-apiserver-df9788ff4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f88ba61-a3fb-420e-a5cb-af0c342ffeab", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"df9788ff4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7", Pod:"calico-apiserver-df9788ff4-5xfh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.92.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie45934163a1", MAC:"92:df:1e:19:ab:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:54.746388 containerd[2001]: 2025-05-27 17:46:54.737 [INFO][5252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" Namespace="calico-apiserver" Pod="calico-apiserver-df9788ff4-5xfh7" WorkloadEndpoint="ip--172--31--23--101-k8s-calico--apiserver--df9788ff4--5xfh7-eth0" May 27 17:46:54.791582 containerd[2001]: time="2025-05-27T17:46:54.791531977Z" level=info msg="connecting to shim b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7" address="unix:///run/containerd/s/4fa6fcb917130563e75f12d8971284e7a61d4656bcbb19372f6cae2dca88d381" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:54.834963 systemd[1]: Started cri-containerd-b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7.scope - libcontainer container b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7. 
May 27 17:46:54.931653 systemd-networkd[1817]: calidad44780271: Gained IPv6LL May 27 17:46:54.940273 containerd[2001]: time="2025-05-27T17:46:54.940230738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-df9788ff4-5xfh7,Uid:7f88ba61-a3fb-420e-a5cb-af0c342ffeab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7\"" May 27 17:46:55.059167 systemd-networkd[1817]: caliefe713cb38f: Gained IPv6LL May 27 17:46:55.508666 containerd[2001]: time="2025-05-27T17:46:55.508545463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zdfdl,Uid:65179e5b-6608-4d3e-a0c5-74ddb29ac692,Namespace:kube-system,Attempt:0,}" May 27 17:46:55.600027 kubelet[3458]: I0527 17:46:55.599982 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:46:55.881634 systemd-networkd[1817]: cali293b28982e1: Link UP May 27 17:46:55.882055 systemd-networkd[1817]: cali293b28982e1: Gained carrier May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.629 [INFO][5349] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.664 [INFO][5349] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0 coredns-674b8bbfcf- kube-system 65179e5b-6608-4d3e-a0c5-74ddb29ac692 817 0 2025-05-27 17:46:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-101 coredns-674b8bbfcf-zdfdl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali293b28982e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.665 [INFO][5349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.756 [INFO][5366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" HandleID="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Workload="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.757 [INFO][5366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" HandleID="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Workload="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9900), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-101", "pod":"coredns-674b8bbfcf-zdfdl", "timestamp":"2025-05-27 17:46:55.75683723 +0000 UTC"}, Hostname:"ip-172-31-23-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.757 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.757 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.757 [INFO][5366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-101' May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.777 [INFO][5366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.788 [INFO][5366] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.804 [INFO][5366] ipam/ipam.go 511: Trying affinity for 192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.808 [INFO][5366] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.814 [INFO][5366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.814 [INFO][5366] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.817 [INFO][5366] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355 May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.829 [INFO][5366] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.847 [INFO][5366] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.8/26] block=192.168.92.0/26 
handle="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.849 [INFO][5366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.8/26] handle="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" host="ip-172-31-23-101" May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.850 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:46:55.940509 containerd[2001]: 2025-05-27 17:46:55.850 [INFO][5366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.8/26] IPv6=[] ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" HandleID="k8s-pod-network.c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Workload="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" May 27 17:46:55.944040 containerd[2001]: 2025-05-27 17:46:55.861 [INFO][5349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"65179e5b-6608-4d3e-a0c5-74ddb29ac692", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"", Pod:"coredns-674b8bbfcf-zdfdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali293b28982e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:55.944040 containerd[2001]: 2025-05-27 17:46:55.861 [INFO][5349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.8/32] ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" May 27 17:46:55.944040 containerd[2001]: 2025-05-27 17:46:55.861 [INFO][5349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali293b28982e1 ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" May 27 17:46:55.944040 containerd[2001]: 2025-05-27 17:46:55.882 [INFO][5349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" May 27 17:46:55.944040 containerd[2001]: 2025-05-27 17:46:55.884 [INFO][5349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"65179e5b-6608-4d3e-a0c5-74ddb29ac692", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 46, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-101", ContainerID:"c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355", Pod:"coredns-674b8bbfcf-zdfdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali293b28982e1", MAC:"a6:d8:46:63:1e:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:46:55.944040 containerd[2001]: 2025-05-27 17:46:55.915 [INFO][5349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" Namespace="kube-system" Pod="coredns-674b8bbfcf-zdfdl" WorkloadEndpoint="ip--172--31--23--101-k8s-coredns--674b8bbfcf--zdfdl-eth0" May 27 17:46:56.049043 containerd[2001]: time="2025-05-27T17:46:56.048987776Z" level=info msg="connecting to shim c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355" address="unix:///run/containerd/s/b2ea0955b20f74ddcd16ec68df4eb2eb90f9e8f49d17ad515c58707db04016a7" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:56.159802 systemd[1]: Started cri-containerd-c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355.scope - libcontainer container c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355. 
May 27 17:46:56.390241 containerd[2001]: time="2025-05-27T17:46:56.390198580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zdfdl,Uid:65179e5b-6608-4d3e-a0c5-74ddb29ac692,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355\"" May 27 17:46:56.407902 containerd[2001]: time="2025-05-27T17:46:56.407326076Z" level=info msg="CreateContainer within sandbox \"c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:46:56.449996 containerd[2001]: time="2025-05-27T17:46:56.449865333Z" level=info msg="Container f285cd45fcf7284cc2ca0cb1dd1e6dec04d7f2f268e9d1a2964eb001e0c25b78: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:56.452571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808405806.mount: Deactivated successfully. May 27 17:46:56.481755 containerd[2001]: time="2025-05-27T17:46:56.481594915Z" level=info msg="CreateContainer within sandbox \"c2ee9a8b0505a3c30152afbebd69f481bdfa14332247fa0318b79599c29b2355\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f285cd45fcf7284cc2ca0cb1dd1e6dec04d7f2f268e9d1a2964eb001e0c25b78\"" May 27 17:46:56.483390 containerd[2001]: time="2025-05-27T17:46:56.483275162Z" level=info msg="StartContainer for \"f285cd45fcf7284cc2ca0cb1dd1e6dec04d7f2f268e9d1a2964eb001e0c25b78\"" May 27 17:46:56.486422 containerd[2001]: time="2025-05-27T17:46:56.486382335Z" level=info msg="connecting to shim f285cd45fcf7284cc2ca0cb1dd1e6dec04d7f2f268e9d1a2964eb001e0c25b78" address="unix:///run/containerd/s/b2ea0955b20f74ddcd16ec68df4eb2eb90f9e8f49d17ad515c58707db04016a7" protocol=ttrpc version=3 May 27 17:46:56.551676 systemd[1]: Started cri-containerd-f285cd45fcf7284cc2ca0cb1dd1e6dec04d7f2f268e9d1a2964eb001e0c25b78.scope - libcontainer container f285cd45fcf7284cc2ca0cb1dd1e6dec04d7f2f268e9d1a2964eb001e0c25b78. 
May 27 17:46:56.652105 containerd[2001]: time="2025-05-27T17:46:56.651607639Z" level=info msg="StartContainer for \"f285cd45fcf7284cc2ca0cb1dd1e6dec04d7f2f268e9d1a2964eb001e0c25b78\" returns successfully"
May 27 17:46:56.722638 systemd-networkd[1817]: calie45934163a1: Gained IPv6LL
May 27 17:46:56.908814 kubelet[3458]: I0527 17:46:56.908698 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zdfdl" podStartSLOduration=42.908651818 podStartE2EDuration="42.908651818s" podCreationTimestamp="2025-05-27 17:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:56.88518333 +0000 UTC m=+49.561550491" watchObservedRunningTime="2025-05-27 17:46:56.908651818 +0000 UTC m=+49.585018962"
May 27 17:46:56.943514 containerd[2001]: time="2025-05-27T17:46:56.943121521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:56.945369 containerd[2001]: time="2025-05-27T17:46:56.945323931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431"
May 27 17:46:56.947943 containerd[2001]: time="2025-05-27T17:46:56.947902854Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:56.953454 containerd[2001]: time="2025-05-27T17:46:56.952792029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:56.953836 containerd[2001]: time="2025-05-27T17:46:56.953792602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 4.482704573s"
May 27 17:46:56.953931 containerd[2001]: time="2025-05-27T17:46:56.953844355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 27 17:46:56.957563 containerd[2001]: time="2025-05-27T17:46:56.957509236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 27 17:46:56.965090 containerd[2001]: time="2025-05-27T17:46:56.964016375Z" level=info msg="CreateContainer within sandbox \"f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 17:46:56.977826 containerd[2001]: time="2025-05-27T17:46:56.977717197Z" level=info msg="Container b47ea145873a3f0f76d35e7e510c6cf4a2437b07526f34e6f0e7c1652fe3fa21: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:56.993331 containerd[2001]: time="2025-05-27T17:46:56.991953158Z" level=info msg="CreateContainer within sandbox \"f9f21ebbaec1e02716fca1cbcf6b35a79ebcccfd12e571a8004c589292a9c417\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b47ea145873a3f0f76d35e7e510c6cf4a2437b07526f34e6f0e7c1652fe3fa21\""
May 27 17:46:56.993331 containerd[2001]: time="2025-05-27T17:46:56.992856744Z" level=info msg="StartContainer for \"b47ea145873a3f0f76d35e7e510c6cf4a2437b07526f34e6f0e7c1652fe3fa21\""
May 27 17:46:56.994521 containerd[2001]: time="2025-05-27T17:46:56.994484481Z" level=info msg="connecting to shim b47ea145873a3f0f76d35e7e510c6cf4a2437b07526f34e6f0e7c1652fe3fa21" address="unix:///run/containerd/s/f893afdaef8c5aab1209ee71f5c7ae6a334d7544be04191338339bc77b16f56a" protocol=ttrpc version=3
May 27 17:46:57.054967 systemd[1]: Started cri-containerd-b47ea145873a3f0f76d35e7e510c6cf4a2437b07526f34e6f0e7c1652fe3fa21.scope - libcontainer container b47ea145873a3f0f76d35e7e510c6cf4a2437b07526f34e6f0e7c1652fe3fa21.
May 27 17:46:57.148662 containerd[2001]: time="2025-05-27T17:46:57.148597652Z" level=info msg="StartContainer for \"b47ea145873a3f0f76d35e7e510c6cf4a2437b07526f34e6f0e7c1652fe3fa21\" returns successfully"
May 27 17:46:57.426602 systemd-networkd[1817]: cali293b28982e1: Gained IPv6LL
May 27 17:46:57.895239 kubelet[3458]: I0527 17:46:57.895099 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-df9788ff4-fl2mg" podStartSLOduration=29.066880788 podStartE2EDuration="33.895060381s" podCreationTimestamp="2025-05-27 17:46:24 +0000 UTC" firstStartedPulling="2025-05-27 17:46:52.127716784 +0000 UTC m=+44.804083919" lastFinishedPulling="2025-05-27 17:46:56.955896375 +0000 UTC m=+49.632263512" observedRunningTime="2025-05-27 17:46:57.893840337 +0000 UTC m=+50.570207479" watchObservedRunningTime="2025-05-27 17:46:57.895060381 +0000 UTC m=+50.571427521"
May 27 17:46:57.902205 systemd-networkd[1817]: vxlan.calico: Link UP
May 27 17:46:57.902254 systemd-networkd[1817]: vxlan.calico: Gained carrier
May 27 17:46:57.966853 (udev-worker)[4495]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:46:58.850263 kubelet[3458]: I0527 17:46:58.850212 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:46:58.964005 systemd-networkd[1817]: vxlan.calico: Gained IPv6LL
May 27 17:46:59.648680 containerd[2001]: time="2025-05-27T17:46:59.648535165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:59.650421 containerd[2001]: time="2025-05-27T17:46:59.650379066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 27 17:46:59.651248 containerd[2001]: time="2025-05-27T17:46:59.651194477Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:59.655448 containerd[2001]: time="2025-05-27T17:46:59.655072061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:59.656501 containerd[2001]: time="2025-05-27T17:46:59.656465102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.698916803s"
May 27 17:46:59.656656 containerd[2001]: time="2025-05-27T17:46:59.656636109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 27 17:46:59.660024 containerd[2001]: time="2025-05-27T17:46:59.659993830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:46:59.665129 containerd[2001]: time="2025-05-27T17:46:59.665016870Z" level=info msg="CreateContainer within sandbox \"006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 27 17:46:59.682698 containerd[2001]: time="2025-05-27T17:46:59.681585737Z" level=info msg="Container fbd4e2d1e78f21f89c35897e38dbfe8a481d2f1ca7627ab6a8c93258aba684b5: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:59.697241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599153778.mount: Deactivated successfully.
May 27 17:46:59.753839 containerd[2001]: time="2025-05-27T17:46:59.753782697Z" level=info msg="CreateContainer within sandbox \"006db83646636edd759cc7919385413d1fa843dc07f3d0c250707c9051ff55a1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fbd4e2d1e78f21f89c35897e38dbfe8a481d2f1ca7627ab6a8c93258aba684b5\""
May 27 17:46:59.755136 containerd[2001]: time="2025-05-27T17:46:59.755106335Z" level=info msg="StartContainer for \"fbd4e2d1e78f21f89c35897e38dbfe8a481d2f1ca7627ab6a8c93258aba684b5\""
May 27 17:46:59.759300 containerd[2001]: time="2025-05-27T17:46:59.759237684Z" level=info msg="connecting to shim fbd4e2d1e78f21f89c35897e38dbfe8a481d2f1ca7627ab6a8c93258aba684b5" address="unix:///run/containerd/s/977794b4ec27e7038ec0302d669fef1add737d5d503c9a9d993f62f476117112" protocol=ttrpc version=3
May 27 17:46:59.806736 systemd[1]: Started cri-containerd-fbd4e2d1e78f21f89c35897e38dbfe8a481d2f1ca7627ab6a8c93258aba684b5.scope - libcontainer container fbd4e2d1e78f21f89c35897e38dbfe8a481d2f1ca7627ab6a8c93258aba684b5.
May 27 17:46:59.903844 containerd[2001]: time="2025-05-27T17:46:59.902856781Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:46:59.907279 containerd[2001]: time="2025-05-27T17:46:59.907231364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:46:59.907640 containerd[2001]: time="2025-05-27T17:46:59.907615553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:46:59.922308 kubelet[3458]: E0527 17:46:59.922248 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:46:59.924838 kubelet[3458]: E0527 17:46:59.922328 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:46:59.924838 kubelet[3458]: E0527 17:46:59.922756 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwgp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-ph2ft_calico-system(0f4dab51-bbfc-4262-8440-a1bb42ffd9e6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:46:59.925898 containerd[2001]: time="2025-05-27T17:46:59.925287526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\""
May 27 17:46:59.926006 kubelet[3458]: E0527 17:46:59.925379 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6"
May 27 17:46:59.949328 containerd[2001]: time="2025-05-27T17:46:59.949056976Z" level=info msg="StartContainer for \"fbd4e2d1e78f21f89c35897e38dbfe8a481d2f1ca7627ab6a8c93258aba684b5\" returns successfully"
May 27 17:47:00.867619 kubelet[3458]: E0527 17:47:00.867509 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6"
May 27 17:47:00.912836 kubelet[3458]: I0527 17:47:00.912762 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2sxmb" podStartSLOduration=23.057291158 podStartE2EDuration="31.912738459s" podCreationTimestamp="2025-05-27 17:46:29 +0000 UTC" firstStartedPulling="2025-05-27 17:46:50.80266543 +0000 UTC m=+43.479032548" lastFinishedPulling="2025-05-27 17:46:59.658112715 +0000 UTC m=+52.334479849" observedRunningTime="2025-05-27 17:47:00.885621864 +0000 UTC m=+53.561989022" watchObservedRunningTime="2025-05-27 17:47:00.912738459 +0000 UTC m=+53.589105600"
May 27 17:47:00.956193 kubelet[3458]: I0527 17:47:00.956126 3458 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 17:47:00.971563 kubelet[3458]: I0527 17:47:00.971491 3458 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 17:47:01.187802 ntpd[1974]: Listen normally on 7 vxlan.calico 192.168.92.0:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 7 vxlan.calico 192.168.92.0:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 8 calieefe9eee4a6 [fe80::ecee:eeff:feee:eeee%4]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 9 calib5a12721545 [fe80::ecee:eeff:feee:eeee%5]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 10 cali6841083c6b2 [fe80::ecee:eeff:feee:eeee%6]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 11 cali4f79bac531f [fe80::ecee:eeff:feee:eeee%7]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 12 calidad44780271 [fe80::ecee:eeff:feee:eeee%8]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 13 caliefe713cb38f [fe80::ecee:eeff:feee:eeee%9]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 14 calie45934163a1 [fe80::ecee:eeff:feee:eeee%10]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 15 cali293b28982e1 [fe80::ecee:eeff:feee:eeee%11]:123
May 27 17:47:01.201868 ntpd[1974]: 27 May 17:47:01 ntpd[1974]: Listen normally on 16 vxlan.calico [fe80::6494:4bff:fea0:ecc2%12]:123
May 27 17:47:01.187901 ntpd[1974]: Listen normally on 8 calieefe9eee4a6 [fe80::ecee:eeff:feee:eeee%4]:123
May 27 17:47:01.187958 ntpd[1974]: Listen normally on 9 calib5a12721545 [fe80::ecee:eeff:feee:eeee%5]:123
May 27 17:47:01.187998 ntpd[1974]: Listen normally on 10 cali6841083c6b2 [fe80::ecee:eeff:feee:eeee%6]:123
May 27 17:47:01.188035 ntpd[1974]: Listen normally on 11 cali4f79bac531f [fe80::ecee:eeff:feee:eeee%7]:123
May 27 17:47:01.188070 ntpd[1974]: Listen normally on 12 calidad44780271 [fe80::ecee:eeff:feee:eeee%8]:123
May 27 17:47:01.188106 ntpd[1974]: Listen normally on 13 caliefe713cb38f [fe80::ecee:eeff:feee:eeee%9]:123
May 27 17:47:01.191291 ntpd[1974]: Listen normally on 14 calie45934163a1 [fe80::ecee:eeff:feee:eeee%10]:123
May 27 17:47:01.193870 ntpd[1974]: Listen normally on 15 cali293b28982e1 [fe80::ecee:eeff:feee:eeee%11]:123
May 27 17:47:01.193925 ntpd[1974]: Listen normally on 16 vxlan.calico [fe80::6494:4bff:fea0:ecc2%12]:123
May 27 17:47:03.116313 kubelet[3458]: I0527 17:47:03.115772 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:47:07.073592 containerd[2001]: time="2025-05-27T17:47:07.073524175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:47:07.075819 containerd[2001]: time="2025-05-27T17:47:07.075777530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512"
May 27 17:47:07.076835 containerd[2001]: time="2025-05-27T17:47:07.076784979Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:47:07.079001 containerd[2001]: time="2025-05-27T17:47:07.078952685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:47:07.079515 containerd[2001]: time="2025-05-27T17:47:07.079479989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 7.153320848s"
May 27 17:47:07.079903 containerd[2001]: time="2025-05-27T17:47:07.079513121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\""
May 27 17:47:07.081129 containerd[2001]: time="2025-05-27T17:47:07.081101050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 27 17:47:07.117669 systemd[1]: Started sshd@9-172.31.23.101:22-139.178.68.195:55608.service - OpenSSH per-connection server daemon (139.178.68.195:55608).
May 27 17:47:07.211576 containerd[2001]: time="2025-05-27T17:47:07.211191892Z" level=info msg="CreateContainer within sandbox \"8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 27 17:47:07.257645 containerd[2001]: time="2025-05-27T17:47:07.256917412Z" level=info msg="Container c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5: CDI devices from CRI Config.CDIDevices: []"
May 27 17:47:07.449177 sshd[5686]: Accepted publickey for core from 139.178.68.195 port 55608 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:47:07.459467 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:47:07.489817 systemd-logind[1979]: New session 10 of user core.
May 27 17:47:07.496793 containerd[2001]: time="2025-05-27T17:47:07.494275290Z" level=info msg="CreateContainer within sandbox \"8838b8956ee3456026ed736182fd320104c1ce670624660a8fad9e9723700949\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5\""
May 27 17:47:07.495745 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 17:47:07.550558 containerd[2001]: time="2025-05-27T17:47:07.550507899Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:47:07.553153 containerd[2001]: time="2025-05-27T17:47:07.551742516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 27 17:47:07.561492 containerd[2001]: time="2025-05-27T17:47:07.561056303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 479.920323ms"
May 27 17:47:07.561492 containerd[2001]: time="2025-05-27T17:47:07.561118409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 27 17:47:07.703755 containerd[2001]: time="2025-05-27T17:47:07.702998706Z" level=info msg="StartContainer for \"c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5\""
May 27 17:47:07.859734 containerd[2001]: time="2025-05-27T17:47:07.859336139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 17:47:07.873634 containerd[2001]: time="2025-05-27T17:47:07.873314065Z" level=info msg="CreateContainer within sandbox \"b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 17:47:07.910492 containerd[2001]: time="2025-05-27T17:47:07.902671883Z" level=info msg="Container 979baca50e253306ff642791e352ad10a53413ff5bd2db52a555349b91b4ec76: CDI devices from CRI Config.CDIDevices: []"
May 27 17:47:07.919948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856290333.mount: Deactivated successfully.
May 27 17:47:07.957853 containerd[2001]: time="2025-05-27T17:47:07.957728384Z" level=info msg="connecting to shim c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5" address="unix:///run/containerd/s/9058b63d6fad359c049189463f70f4d2d1670578f6d540b59a9cc34e9a3cb529" protocol=ttrpc version=3
May 27 17:47:08.027144 containerd[2001]: time="2025-05-27T17:47:08.026380692Z" level=info msg="CreateContainer within sandbox \"b19720d821e3bceed5a9a7aa3562fcb3832f7aa8392196256639075108dbd2c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"979baca50e253306ff642791e352ad10a53413ff5bd2db52a555349b91b4ec76\""
May 27 17:47:08.036411 containerd[2001]: time="2025-05-27T17:47:08.035588847Z" level=info msg="StartContainer for \"979baca50e253306ff642791e352ad10a53413ff5bd2db52a555349b91b4ec76\""
May 27 17:47:08.080404 containerd[2001]: time="2025-05-27T17:47:08.080355178Z" level=info msg="connecting to shim 979baca50e253306ff642791e352ad10a53413ff5bd2db52a555349b91b4ec76" address="unix:///run/containerd/s/4fa6fcb917130563e75f12d8971284e7a61d4656bcbb19372f6cae2dca88d381" protocol=ttrpc version=3
May 27 17:47:08.118402 containerd[2001]: time="2025-05-27T17:47:08.116098663Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:47:08.128475 containerd[2001]: time="2025-05-27T17:47:08.128335366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:47:08.137620 containerd[2001]: time="2025-05-27T17:47:08.129298992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 17:47:08.138640 kubelet[3458]: E0527 17:47:08.138583 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:47:08.141737 kubelet[3458]: E0527 17:47:08.141547 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:47:08.170700 kubelet[3458]: E0527 17:47:08.169880 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c527f9838fbc47129edb353dd886be2b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:47:08.174629 containerd[2001]: time="2025-05-27T17:47:08.172160822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 17:47:08.274737 systemd[1]: Started cri-containerd-c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5.scope - libcontainer container c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5.
May 27 17:47:08.313676 systemd[1]: Started cri-containerd-979baca50e253306ff642791e352ad10a53413ff5bd2db52a555349b91b4ec76.scope - libcontainer container 979baca50e253306ff642791e352ad10a53413ff5bd2db52a555349b91b4ec76.
May 27 17:47:08.593720 containerd[2001]: time="2025-05-27T17:47:08.593656518Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:47:08.597322 containerd[2001]: time="2025-05-27T17:47:08.596231906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:47:08.597322 containerd[2001]: time="2025-05-27T17:47:08.597037484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:47:08.598194 kubelet[3458]: E0527 17:47:08.598142 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:47:08.598358 kubelet[3458]: E0527 17:47:08.598206 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:47:08.598449 kubelet[3458]: E0527 17:47:08.598349 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:47:08.601318 kubelet[3458]: E0527 17:47:08.600284 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a"
May 27 17:47:08.656076 containerd[2001]: time="2025-05-27T17:47:08.655027505Z" level=info msg="StartContainer for \"c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5\" returns successfully"
May 27 17:47:08.658727 containerd[2001]: time="2025-05-27T17:47:08.658692773Z" level=info msg="StartContainer for \"979baca50e253306ff642791e352ad10a53413ff5bd2db52a555349b91b4ec76\" returns successfully"
May 27 17:47:09.063043 sshd[5690]: Connection closed by 139.178.68.195 port 55608
May 27 17:47:09.064652 sshd-session[5686]: pam_unix(sshd:session): session closed for user core
May 27 17:47:09.083461 systemd-logind[1979]: Session 10 logged out. Waiting for processes to exit.
May 27 17:47:09.083628 systemd[1]: sshd@9-172.31.23.101:22-139.178.68.195:55608.service: Deactivated successfully.
May 27 17:47:09.089687 systemd[1]: session-10.scope: Deactivated successfully.
May 27 17:47:09.095180 systemd-logind[1979]: Removed session 10.
May 27 17:47:09.191916 kubelet[3458]: I0527 17:47:09.190445 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-df9788ff4-5xfh7" podStartSLOduration=32.347772619 podStartE2EDuration="45.190411286s" podCreationTimestamp="2025-05-27 17:46:24 +0000 UTC" firstStartedPulling="2025-05-27 17:46:54.9434694 +0000 UTC m=+47.619836533" lastFinishedPulling="2025-05-27 17:47:07.786108079 +0000 UTC m=+60.462475200" observedRunningTime="2025-05-27 17:47:09.166595369 +0000 UTC m=+61.842962510" watchObservedRunningTime="2025-05-27 17:47:09.190411286 +0000 UTC m=+61.866778426" May 27 17:47:09.384519 containerd[2001]: time="2025-05-27T17:47:09.384378201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5\" id:\"d7d222c5ba03b961ba293c71762897d690cabca8cff24bcf6f34dcd6e5200c6d\" pid:5803 exited_at:{seconds:1748368029 nanos:341262516}" May 27 17:47:09.409734 kubelet[3458]: I0527 17:47:09.409118 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bd9c5fbd7-ml6ks" podStartSLOduration=27.110501302 podStartE2EDuration="40.409099325s" podCreationTimestamp="2025-05-27 17:46:29 +0000 UTC" firstStartedPulling="2025-05-27 17:46:53.782222766 +0000 UTC m=+46.458589883" lastFinishedPulling="2025-05-27 17:47:07.080820775 +0000 UTC m=+59.757187906" observedRunningTime="2025-05-27 17:47:09.192263206 +0000 UTC m=+61.868630346" watchObservedRunningTime="2025-05-27 17:47:09.409099325 +0000 UTC m=+62.085466478" May 27 17:47:10.153978 kubelet[3458]: I0527 17:47:10.153829 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:47:13.569991 containerd[2001]: time="2025-05-27T17:47:13.569840154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:47:13.747690 containerd[2001]: time="2025-05-27T17:47:13.747642232Z" level=info msg="fetch failed" 
error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:47:13.750573 containerd[2001]: time="2025-05-27T17:47:13.750518209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:47:13.750875 kubelet[3458]: E0527 17:47:13.750752 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:47:13.750875 kubelet[3458]: E0527 17:47:13.750796 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:47:13.760974 containerd[2001]: time="2025-05-27T17:47:13.750529467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active 
requests=0, bytes read=86" May 27 17:47:13.775197 kubelet[3458]: E0527 17:47:13.775114 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwgp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-ph2ft_calico-system(0f4dab51-bbfc-4262-8440-a1bb42ffd9e6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:47:13.785413 kubelet[3458]: E0527 17:47:13.785345 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6" May 27 17:47:14.099511 systemd[1]: Started 
sshd@10-172.31.23.101:22-139.178.68.195:33132.service - OpenSSH per-connection server daemon (139.178.68.195:33132). May 27 17:47:14.391294 sshd[5816]: Accepted publickey for core from 139.178.68.195 port 33132 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:14.396574 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:14.403723 systemd-logind[1979]: New session 11 of user core. May 27 17:47:14.408679 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:47:15.246528 sshd[5818]: Connection closed by 139.178.68.195 port 33132 May 27 17:47:15.247664 sshd-session[5816]: pam_unix(sshd:session): session closed for user core May 27 17:47:15.251991 systemd-logind[1979]: Session 11 logged out. Waiting for processes to exit. May 27 17:47:15.252679 systemd[1]: sshd@10-172.31.23.101:22-139.178.68.195:33132.service: Deactivated successfully. May 27 17:47:15.255466 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:47:15.257979 systemd-logind[1979]: Removed session 11. May 27 17:47:15.282167 systemd[1]: Started sshd@11-172.31.23.101:22-139.178.68.195:33134.service - OpenSSH per-connection server daemon (139.178.68.195:33134). May 27 17:47:15.468523 sshd[5832]: Accepted publickey for core from 139.178.68.195 port 33134 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:15.469980 sshd-session[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:15.476826 systemd-logind[1979]: New session 12 of user core. May 27 17:47:15.483708 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:47:15.731920 sshd[5834]: Connection closed by 139.178.68.195 port 33134 May 27 17:47:15.732738 sshd-session[5832]: pam_unix(sshd:session): session closed for user core May 27 17:47:15.739200 systemd-logind[1979]: Session 12 logged out. Waiting for processes to exit. 
May 27 17:47:15.741000 systemd[1]: sshd@11-172.31.23.101:22-139.178.68.195:33134.service: Deactivated successfully. May 27 17:47:15.745354 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:47:15.749545 systemd-logind[1979]: Removed session 12. May 27 17:47:15.768516 systemd[1]: Started sshd@12-172.31.23.101:22-139.178.68.195:33142.service - OpenSSH per-connection server daemon (139.178.68.195:33142). May 27 17:47:15.948988 sshd[5844]: Accepted publickey for core from 139.178.68.195 port 33142 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:15.950494 sshd-session[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:15.957838 systemd-logind[1979]: New session 13 of user core. May 27 17:47:15.964675 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:47:16.156868 sshd[5848]: Connection closed by 139.178.68.195 port 33142 May 27 17:47:16.157652 sshd-session[5844]: pam_unix(sshd:session): session closed for user core May 27 17:47:16.165152 systemd-logind[1979]: Session 13 logged out. Waiting for processes to exit. May 27 17:47:16.166237 systemd[1]: sshd@12-172.31.23.101:22-139.178.68.195:33142.service: Deactivated successfully. May 27 17:47:16.168895 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:47:16.171305 systemd-logind[1979]: Removed session 13. 
May 27 17:47:16.984760 kubelet[3458]: I0527 17:47:16.984597 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:47:18.923082 containerd[2001]: time="2025-05-27T17:47:18.923028426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5\" id:\"c8fee7c7b8c016e4d8eae58cfac78fa0bcbc0405bd739a4326ead6760f883be7\" pid:5882 exited_at:{seconds:1748368038 nanos:921295163}" May 27 17:47:20.513215 kubelet[3458]: E0527 17:47:20.513105 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a" May 27 17:47:21.189732 systemd[1]: Started sshd@13-172.31.23.101:22-139.178.68.195:33146.service - OpenSSH per-connection server daemon (139.178.68.195:33146). 
May 27 17:47:21.449711 sshd[5895]: Accepted publickey for core from 139.178.68.195 port 33146 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:21.452484 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:21.459047 systemd-logind[1979]: New session 14 of user core. May 27 17:47:21.465686 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:47:21.734028 sshd[5897]: Connection closed by 139.178.68.195 port 33146 May 27 17:47:21.734680 sshd-session[5895]: pam_unix(sshd:session): session closed for user core May 27 17:47:21.741327 systemd-logind[1979]: Session 14 logged out. Waiting for processes to exit. May 27 17:47:21.741650 systemd[1]: sshd@13-172.31.23.101:22-139.178.68.195:33146.service: Deactivated successfully. May 27 17:47:21.744481 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:47:21.747383 systemd-logind[1979]: Removed session 14. May 27 17:47:26.770317 systemd[1]: Started sshd@14-172.31.23.101:22-139.178.68.195:46842.service - OpenSSH per-connection server daemon (139.178.68.195:46842). May 27 17:47:27.081675 sshd[5918]: Accepted publickey for core from 139.178.68.195 port 46842 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:27.085145 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:27.091689 systemd-logind[1979]: New session 15 of user core. May 27 17:47:27.097635 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:47:27.541498 sshd[5920]: Connection closed by 139.178.68.195 port 46842 May 27 17:47:27.542055 sshd-session[5918]: pam_unix(sshd:session): session closed for user core May 27 17:47:27.548197 systemd[1]: sshd@14-172.31.23.101:22-139.178.68.195:46842.service: Deactivated successfully. May 27 17:47:27.552395 systemd[1]: session-15.scope: Deactivated successfully. 
May 27 17:47:27.556072 systemd-logind[1979]: Session 15 logged out. Waiting for processes to exit. May 27 17:47:27.559117 systemd-logind[1979]: Removed session 15. May 27 17:47:27.603179 kubelet[3458]: E0527 17:47:27.602995 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6" May 27 17:47:31.561605 containerd[2001]: time="2025-05-27T17:47:31.561388421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:47:31.745493 containerd[2001]: time="2025-05-27T17:47:31.745412254Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:47:31.748566 containerd[2001]: time="2025-05-27T17:47:31.748502891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:47:31.748898 containerd[2001]: time="2025-05-27T17:47:31.748531029Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:47:31.761480 kubelet[3458]: E0527 17:47:31.754525 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:47:31.771186 kubelet[3458]: E0527 17:47:31.770918 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:47:31.774980 kubelet[3458]: E0527 17:47:31.774853 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c527f9838fbc47129edb353dd886be2b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:47:31.785752 containerd[2001]: 
time="2025-05-27T17:47:31.785693920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:47:31.970239 containerd[2001]: time="2025-05-27T17:47:31.969678449Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:47:31.973892 containerd[2001]: time="2025-05-27T17:47:31.973691342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:47:31.974071 kubelet[3458]: E0527 17:47:31.973962 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:47:31.974071 kubelet[3458]: E0527 17:47:31.974007 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:47:31.976422 kubelet[3458]: E0527 17:47:31.974146 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:47:31.976422 kubelet[3458]: E0527 17:47:31.975675 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a" May 27 17:47:31.979042 containerd[2001]: time="2025-05-27T17:47:31.973738603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:47:32.573310 systemd[1]: Started sshd@15-172.31.23.101:22-139.178.68.195:46850.service - 
OpenSSH per-connection server daemon (139.178.68.195:46850). May 27 17:47:32.842530 sshd[5934]: Accepted publickey for core from 139.178.68.195 port 46850 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:32.847083 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:32.857816 systemd-logind[1979]: New session 16 of user core. May 27 17:47:32.862863 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:47:32.884372 containerd[2001]: time="2025-05-27T17:47:32.884337269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5\" id:\"cf1604613ea6ada6f60eb80ca0b4c8c889e20c5acfe10bdbff31bf658bae30a7\" pid:5949 exited_at:{seconds:1748368052 nanos:883608755}" May 27 17:47:33.720472 sshd[5955]: Connection closed by 139.178.68.195 port 46850 May 27 17:47:33.721125 sshd-session[5934]: pam_unix(sshd:session): session closed for user core May 27 17:47:33.726285 systemd[1]: sshd@15-172.31.23.101:22-139.178.68.195:46850.service: Deactivated successfully. May 27 17:47:33.728864 systemd[1]: session-16.scope: Deactivated successfully. May 27 17:47:33.730232 systemd-logind[1979]: Session 16 logged out. Waiting for processes to exit. May 27 17:47:33.733187 systemd-logind[1979]: Removed session 16. May 27 17:47:38.754297 systemd[1]: Started sshd@16-172.31.23.101:22-139.178.68.195:34898.service - OpenSSH per-connection server daemon (139.178.68.195:34898). May 27 17:47:38.964979 sshd[5978]: Accepted publickey for core from 139.178.68.195 port 34898 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:38.966768 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:38.973280 systemd-logind[1979]: New session 17 of user core. May 27 17:47:38.978938 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 27 17:47:39.219941 containerd[2001]: time="2025-05-27T17:47:39.219692095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5\" id:\"5560171bc913652dfda8e124f859ba31042d66a744edef6c555243a547010fe8\" pid:5999 exited_at:{seconds:1748368059 nanos:217861796}" May 27 17:47:39.425644 sshd[5980]: Connection closed by 139.178.68.195 port 34898 May 27 17:47:39.427528 sshd-session[5978]: pam_unix(sshd:session): session closed for user core May 27 17:47:39.432416 systemd-logind[1979]: Session 17 logged out. Waiting for processes to exit. May 27 17:47:39.432911 systemd[1]: sshd@16-172.31.23.101:22-139.178.68.195:34898.service: Deactivated successfully. May 27 17:47:39.435160 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:47:39.438320 systemd-logind[1979]: Removed session 17. May 27 17:47:39.460990 systemd[1]: Started sshd@17-172.31.23.101:22-139.178.68.195:34902.service - OpenSSH per-connection server daemon (139.178.68.195:34902). May 27 17:47:39.663144 sshd[6013]: Accepted publickey for core from 139.178.68.195 port 34902 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k May 27 17:47:39.664869 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:39.671537 systemd-logind[1979]: New session 18 of user core. May 27 17:47:39.683832 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 17:47:40.493310 sshd[6015]: Connection closed by 139.178.68.195 port 34902 May 27 17:47:40.494552 sshd-session[6013]: pam_unix(sshd:session): session closed for user core May 27 17:47:40.499343 systemd-logind[1979]: Session 18 logged out. Waiting for processes to exit. May 27 17:47:40.500095 systemd[1]: sshd@17-172.31.23.101:22-139.178.68.195:34902.service: Deactivated successfully. May 27 17:47:40.505279 systemd[1]: session-18.scope: Deactivated successfully. 
May 27 17:47:40.510202 containerd[2001]: time="2025-05-27T17:47:40.510166201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:47:40.528887 systemd-logind[1979]: Removed session 18.
May 27 17:47:40.530568 systemd[1]: Started sshd@18-172.31.23.101:22-139.178.68.195:34914.service - OpenSSH per-connection server daemon (139.178.68.195:34914).
May 27 17:47:40.684420 containerd[2001]: time="2025-05-27T17:47:40.684366892Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:47:40.686461 containerd[2001]: time="2025-05-27T17:47:40.686385113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:47:40.686758 containerd[2001]: time="2025-05-27T17:47:40.686634268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:47:40.687059 kubelet[3458]: E0527 17:47:40.687012 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:47:40.688225 kubelet[3458]: E0527 17:47:40.687065 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:47:40.688225 kubelet[3458]: E0527 17:47:40.687210 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwgp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-ph2ft_calico-system(0f4dab51-bbfc-4262-8440-a1bb42ffd9e6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:47:40.688987 kubelet[3458]: E0527 17:47:40.688777 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6"
May 27 17:47:40.718503 sshd[6025]: Accepted publickey for core from 139.178.68.195 port 34914 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:47:40.719557 sshd-session[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:47:40.725056 systemd-logind[1979]: New session 19 of user core.
May 27 17:47:40.730692 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:47:42.036344 sshd[6027]: Connection closed by 139.178.68.195 port 34914
May 27 17:47:42.040397 sshd-session[6025]: pam_unix(sshd:session): session closed for user core
May 27 17:47:42.049675 systemd-logind[1979]: Session 19 logged out. Waiting for processes to exit.
May 27 17:47:42.051024 systemd[1]: sshd@18-172.31.23.101:22-139.178.68.195:34914.service: Deactivated successfully.
May 27 17:47:42.055299 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:47:42.074153 systemd-logind[1979]: Removed session 19.
May 27 17:47:42.075705 systemd[1]: Started sshd@19-172.31.23.101:22-139.178.68.195:34926.service - OpenSSH per-connection server daemon (139.178.68.195:34926).
May 27 17:47:42.350928 sshd[6047]: Accepted publickey for core from 139.178.68.195 port 34926 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:47:42.352498 sshd-session[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:47:42.359258 systemd-logind[1979]: New session 20 of user core.
May 27 17:47:42.364803 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:47:43.559073 sshd[6051]: Connection closed by 139.178.68.195 port 34926
May 27 17:47:43.562847 sshd-session[6047]: pam_unix(sshd:session): session closed for user core
May 27 17:47:43.597673 systemd[1]: sshd@19-172.31.23.101:22-139.178.68.195:34926.service: Deactivated successfully.
May 27 17:47:43.602295 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:47:43.608600 systemd-logind[1979]: Session 20 logged out. Waiting for processes to exit.
May 27 17:47:43.610944 systemd[1]: Started sshd@20-172.31.23.101:22-139.178.68.195:49922.service - OpenSSH per-connection server daemon (139.178.68.195:49922).
May 27 17:47:43.614718 systemd-logind[1979]: Removed session 20.
May 27 17:47:43.809256 sshd[6062]: Accepted publickey for core from 139.178.68.195 port 49922 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:47:43.812361 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:47:43.821092 systemd-logind[1979]: New session 21 of user core.
May 27 17:47:43.824657 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:47:44.142954 sshd[6064]: Connection closed by 139.178.68.195 port 49922
May 27 17:47:44.144686 sshd-session[6062]: pam_unix(sshd:session): session closed for user core
May 27 17:47:44.149357 systemd[1]: sshd@20-172.31.23.101:22-139.178.68.195:49922.service: Deactivated successfully.
May 27 17:47:44.151702 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:47:44.152925 systemd-logind[1979]: Session 21 logged out. Waiting for processes to exit.
May 27 17:47:44.155206 systemd-logind[1979]: Removed session 21.
May 27 17:47:47.513360 kubelet[3458]: E0527 17:47:47.513144 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a"
May 27 17:47:49.049996 containerd[2001]: time="2025-05-27T17:47:49.049554465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c18e48f4594e1d2310cfe92d66d05d0150925131c285d2fcbdd962f14772ce5\" id:\"124ef8086b738fb6e0dad9ed9bf292a90e67f472229ab826496bdaa1be742c06\" pid:6091 exited_at:{seconds:1748368069 nanos:48719923}"
May 27 17:47:49.178306 systemd[1]: Started sshd@21-172.31.23.101:22-139.178.68.195:49932.service - OpenSSH per-connection server daemon (139.178.68.195:49932).
May 27 17:47:49.407103 sshd[6102]: Accepted publickey for core from 139.178.68.195 port 49932 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:47:49.410336 sshd-session[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:47:49.418560 systemd-logind[1979]: New session 22 of user core.
May 27 17:47:49.423754 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:47:49.950026 sshd[6104]: Connection closed by 139.178.68.195 port 49932
May 27 17:47:49.951834 sshd-session[6102]: pam_unix(sshd:session): session closed for user core
May 27 17:47:49.955374 systemd[1]: sshd@21-172.31.23.101:22-139.178.68.195:49932.service: Deactivated successfully.
May 27 17:47:49.958775 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:47:49.961343 systemd-logind[1979]: Session 22 logged out. Waiting for processes to exit.
May 27 17:47:49.962901 systemd-logind[1979]: Removed session 22.
May 27 17:47:54.987109 systemd[1]: Started sshd@22-172.31.23.101:22-139.178.68.195:49136.service - OpenSSH per-connection server daemon (139.178.68.195:49136).
May 27 17:47:55.171485 sshd[6118]: Accepted publickey for core from 139.178.68.195 port 49136 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:47:55.173219 sshd-session[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:47:55.179923 systemd-logind[1979]: New session 23 of user core.
May 27 17:47:55.184719 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:47:55.412443 sshd[6120]: Connection closed by 139.178.68.195 port 49136
May 27 17:47:55.414004 sshd-session[6118]: pam_unix(sshd:session): session closed for user core
May 27 17:47:55.419650 systemd-logind[1979]: Session 23 logged out. Waiting for processes to exit.
May 27 17:47:55.420731 systemd[1]: sshd@22-172.31.23.101:22-139.178.68.195:49136.service: Deactivated successfully.
May 27 17:47:55.423785 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:47:55.426294 systemd-logind[1979]: Removed session 23.
May 27 17:47:56.504467 kubelet[3458]: E0527 17:47:56.503942 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6"
May 27 17:48:00.451866 systemd[1]: Started sshd@23-172.31.23.101:22-139.178.68.195:49138.service - OpenSSH per-connection server daemon (139.178.68.195:49138).
May 27 17:48:00.505773 kubelet[3458]: E0527 17:48:00.505596 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a"
May 27 17:48:00.727786 sshd[6134]: Accepted publickey for core from 139.178.68.195 port 49138 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:48:00.731591 sshd-session[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:48:00.741197 systemd-logind[1979]: New session 24 of user core.
May 27 17:48:00.749679 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:48:02.978638 sshd[6136]: Connection closed by 139.178.68.195 port 49138
May 27 17:48:02.985284 sshd-session[6134]: pam_unix(sshd:session): session closed for user core
May 27 17:48:02.997272 systemd[1]: sshd@23-172.31.23.101:22-139.178.68.195:49138.service: Deactivated successfully.
May 27 17:48:02.997321 systemd-logind[1979]: Session 24 logged out. Waiting for processes to exit.
May 27 17:48:03.003705 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:48:03.010620 systemd-logind[1979]: Removed session 24.
May 27 17:48:07.530451 kubelet[3458]: E0527 17:48:07.529393 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ph2ft" podUID="0f4dab51-bbfc-4262-8440-a1bb42ffd9e6"
May 27 17:48:08.020578 systemd[1]: Started sshd@24-172.31.23.101:22-139.178.68.195:45752.service - OpenSSH per-connection server daemon (139.178.68.195:45752).
May 27 17:48:08.216312 sshd[6151]: Accepted publickey for core from 139.178.68.195 port 45752 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:48:08.220249 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:48:08.228754 systemd-logind[1979]: New session 25 of user core.
May 27 17:48:08.236696 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:48:09.371466 sshd[6153]: Connection closed by 139.178.68.195 port 45752
May 27 17:48:09.368880 sshd-session[6151]: pam_unix(sshd:session): session closed for user core
May 27 17:48:09.385986 systemd[1]: sshd@24-172.31.23.101:22-139.178.68.195:45752.service: Deactivated successfully.
May 27 17:48:09.394268 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:48:09.412028 systemd-logind[1979]: Session 25 logged out. Waiting for processes to exit.
May 27 17:48:09.420343 systemd-logind[1979]: Removed session 25.
May 27 17:48:09.429452 containerd[2001]: time="2025-05-27T17:48:09.429381691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4fb0b04c0ede10c79b01248cfbe529bd45133f3d9fac56bc7648d6fd9c751b5\" id:\"83435f720bc08e44bdca6689c58f7bee26cc797c77807319dca2d83a49a7e0aa\" pid:6174 exited_at:{seconds:1748368089 nanos:362692137}"
May 27 17:48:14.399774 systemd[1]: Started sshd@25-172.31.23.101:22-139.178.68.195:59204.service - OpenSSH per-connection server daemon (139.178.68.195:59204).
May 27 17:48:14.561681 containerd[2001]: time="2025-05-27T17:48:14.561536973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 17:48:14.676096 sshd[6187]: Accepted publickey for core from 139.178.68.195 port 59204 ssh2: RSA SHA256:xw6q+PFEcsNuLPLiMS7m7EDfm5JfbDBKG0BT5SOw68k
May 27 17:48:14.680620 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:48:14.689006 systemd-logind[1979]: New session 26 of user core.
May 27 17:48:14.695751 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 17:48:14.763944 containerd[2001]: time="2025-05-27T17:48:14.763792649Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:48:14.766294 containerd[2001]: time="2025-05-27T17:48:14.766232641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:48:14.766491 containerd[2001]: time="2025-05-27T17:48:14.766292083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 17:48:14.785734 kubelet[3458]: E0527 17:48:14.766861 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:48:14.794223 kubelet[3458]: E0527 17:48:14.794153 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:48:14.800322 kubelet[3458]: E0527 17:48:14.800243 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c527f9838fbc47129edb353dd886be2b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:48:14.802795 containerd[2001]: time="2025-05-27T17:48:14.802488302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 17:48:14.982607 containerd[2001]: time="2025-05-27T17:48:14.982494370Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:48:14.984771 containerd[2001]: time="2025-05-27T17:48:14.984685302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:48:14.985365 containerd[2001]: time="2025-05-27T17:48:14.984861095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:48:14.985410 kubelet[3458]: E0527 17:48:14.985169 3458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:48:14.985410 kubelet[3458]: E0527 17:48:14.985210 3458 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:48:14.985410 kubelet[3458]: E0527 17:48:14.985314 3458 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6hd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dff597dcf-9vfrd_calico-system(901818f8-0dca-4e36-a31e-1e343d7aa13a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:48:14.986840 kubelet[3458]: E0527 17:48:14.986784 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5dff597dcf-9vfrd" podUID="901818f8-0dca-4e36-a31e-1e343d7aa13a"
May 27 17:48:15.063542 sshd[6189]: Connection closed by 139.178.68.195 port 59204
May 27 17:48:15.064194 sshd-session[6187]: pam_unix(sshd:session): session closed for user core
May 27 17:48:15.072098 systemd-logind[1979]: Session 26 logged out. Waiting for processes to exit.
May 27 17:48:15.073179 systemd[1]: sshd@25-172.31.23.101:22-139.178.68.195:59204.service: Deactivated successfully.
May 27 17:48:15.079217 systemd[1]: session-26.scope: Deactivated successfully.
May 27 17:48:15.084304 systemd-logind[1979]: Removed session 26.