May 27 03:18:54.921853 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:18:54.921890 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:18:54.921906 kernel: BIOS-provided physical RAM map:
May 27 03:18:54.921918 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:18:54.921930 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 27 03:18:54.921942 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 27 03:18:54.921958 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 27 03:18:54.921971 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 27 03:18:54.921987 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 27 03:18:54.922000 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 27 03:18:54.922014 kernel: NX (Execute Disable) protection: active
May 27 03:18:54.922026 kernel: APIC: Static calls initialized
May 27 03:18:54.922040 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
May 27 03:18:54.922053 kernel: extended physical RAM map:
May 27 03:18:54.922073 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:18:54.922087 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
May 27 03:18:54.922102 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
May 27 03:18:54.922116 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
May 27 03:18:54.922130 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 27 03:18:54.922144 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 27 03:18:54.922158 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 27 03:18:54.922172 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 27 03:18:54.922186 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 27 03:18:54.922222 kernel: efi: EFI v2.7 by EDK II
May 27 03:18:54.922239 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
May 27 03:18:54.922251 kernel: secureboot: Secure boot disabled
May 27 03:18:54.922262 kernel: SMBIOS 2.7 present.
May 27 03:18:54.922273 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 27 03:18:54.922284 kernel: DMI: Memory slots populated: 1/1
May 27 03:18:54.922296 kernel: Hypervisor detected: KVM
May 27 03:18:54.922308 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:18:54.922320 kernel: kvm-clock: using sched offset of 5064470843 cycles
May 27 03:18:54.922334 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:18:54.922346 kernel: tsc: Detected 2500.004 MHz processor
May 27 03:18:54.922359 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:18:54.922375 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:18:54.922387 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 27 03:18:54.922402 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 03:18:54.922415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:18:54.922427 kernel: Using GB pages for direct mapping
May 27 03:18:54.922447 kernel: ACPI: Early table checksum verification disabled
May 27 03:18:54.922464 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 27 03:18:54.922479 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 27 03:18:54.922494 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 27 03:18:54.922509 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 27 03:18:54.922524 kernel: ACPI: FACS 0x00000000789D0000 000040
May 27 03:18:54.922538 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 27 03:18:54.922553 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 27 03:18:54.922569 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 27 03:18:54.922587 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 27 03:18:54.922601 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 27 03:18:54.922616 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 27 03:18:54.922630 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 27 03:18:54.922646 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 27 03:18:54.922661 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 27 03:18:54.922676 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 27 03:18:54.922691 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 27 03:18:54.922710 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 27 03:18:54.922724 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 27 03:18:54.922740 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 27 03:18:54.922755 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 27 03:18:54.922770 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 27 03:18:54.922784 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 27 03:18:54.922799 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 27 03:18:54.922811 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 27 03:18:54.922823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 27 03:18:54.922836 kernel: NUMA: Initialized distance table, cnt=1
May 27 03:18:54.922855 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
May 27 03:18:54.922868 kernel: Zone ranges:
May 27 03:18:54.922881 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:18:54.922894 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 27 03:18:54.922908 kernel: Normal empty
May 27 03:18:54.922921 kernel: Device empty
May 27 03:18:54.922935 kernel: Movable zone start for each node
May 27 03:18:54.922949 kernel: Early memory node ranges
May 27 03:18:54.922964 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 03:18:54.922982 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 27 03:18:54.922997 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 27 03:18:54.923011 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 27 03:18:54.923026 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:18:54.923041 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 03:18:54.923055 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 27 03:18:54.923070 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 27 03:18:54.923085 kernel: ACPI: PM-Timer IO Port: 0xb008
May 27 03:18:54.923100 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:18:54.923118 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 27 03:18:54.923132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:18:54.923147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:18:54.923162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:18:54.923177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:18:54.924248 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:18:54.924274 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:18:54.924290 kernel: TSC deadline timer available
May 27 03:18:54.924305 kernel: CPU topo: Max. logical packages: 1
May 27 03:18:54.924319 kernel: CPU topo: Max. logical dies: 1
May 27 03:18:54.924338 kernel: CPU topo: Max. dies per package: 1
May 27 03:18:54.924351 kernel: CPU topo: Max. threads per core: 2
May 27 03:18:54.924361 kernel: CPU topo: Num. cores per package: 1
May 27 03:18:54.924373 kernel: CPU topo: Num. threads per package: 2
May 27 03:18:54.924384 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 03:18:54.924397 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:18:54.924409 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 27 03:18:54.924423 kernel: Booting paravirtualized kernel on KVM
May 27 03:18:54.924436 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:18:54.924453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 03:18:54.924465 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 03:18:54.924479 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 03:18:54.924492 kernel: pcpu-alloc: [0] 0 1
May 27 03:18:54.924506 kernel: kvm-guest: PV spinlocks enabled
May 27 03:18:54.924521 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:18:54.924539 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:18:54.924553 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:18:54.924568 kernel: random: crng init done
May 27 03:18:54.924583 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:18:54.924597 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 03:18:54.924611 kernel: Fallback order for Node 0: 0
May 27 03:18:54.924624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
May 27 03:18:54.924637 kernel: Policy zone: DMA32
May 27 03:18:54.924662 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:18:54.924676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 03:18:54.924690 kernel: Kernel/User page tables isolation: enabled
May 27 03:18:54.924704 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:18:54.924718 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:18:54.924735 kernel: Dynamic Preempt: voluntary
May 27 03:18:54.924749 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:18:54.924766 kernel: rcu: RCU event tracing is enabled.
May 27 03:18:54.924780 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 03:18:54.924794 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:18:54.924810 kernel: Rude variant of Tasks RCU enabled.
May 27 03:18:54.924829 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:18:54.924844 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:18:54.924860 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 03:18:54.924876 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:18:54.924891 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:18:54.924907 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:18:54.924923 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 03:18:54.924939 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:18:54.924957 kernel: Console: colour dummy device 80x25
May 27 03:18:54.924972 kernel: printk: legacy console [tty0] enabled
May 27 03:18:54.924987 kernel: printk: legacy console [ttyS0] enabled
May 27 03:18:54.925003 kernel: ACPI: Core revision 20240827
May 27 03:18:54.925019 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 27 03:18:54.925035 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:18:54.925050 kernel: x2apic enabled
May 27 03:18:54.925066 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:18:54.925083 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
May 27 03:18:54.925102 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
May 27 03:18:54.925118 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 27 03:18:54.925134 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 27 03:18:54.925150 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:18:54.925166 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:18:54.925181 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:18:54.925213 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 27 03:18:54.925229 kernel: RETBleed: Vulnerable
May 27 03:18:54.925244 kernel: Speculative Store Bypass: Vulnerable
May 27 03:18:54.925259 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 03:18:54.925275 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 03:18:54.925293 kernel: GDS: Unknown: Dependent on hypervisor status
May 27 03:18:54.925308 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 03:18:54.925324 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:18:54.925339 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:18:54.925354 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:18:54.925370 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 27 03:18:54.925385 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 27 03:18:54.925401 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 27 03:18:54.925417 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 27 03:18:54.925432 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 27 03:18:54.925450 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 27 03:18:54.925466 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:18:54.925481 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 27 03:18:54.925497 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 27 03:18:54.925512 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 27 03:18:54.925527 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 27 03:18:54.925542 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 27 03:18:54.925558 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 27 03:18:54.925573 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
May 27 03:18:54.925588 kernel: Freeing SMP alternatives memory: 32K
May 27 03:18:54.925604 kernel: pid_max: default: 32768 minimum: 301
May 27 03:18:54.925619 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:18:54.925637 kernel: landlock: Up and running.
May 27 03:18:54.925652 kernel: SELinux: Initializing.
May 27 03:18:54.925668 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 03:18:54.925683 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 03:18:54.925699 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 27 03:18:54.925715 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 27 03:18:54.925730 kernel: signal: max sigframe size: 3632
May 27 03:18:54.925746 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:18:54.925762 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:18:54.925778 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:18:54.925797 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 03:18:54.925813 kernel: smp: Bringing up secondary CPUs ...
May 27 03:18:54.925828 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:18:54.925844 kernel: .... node #0, CPUs: #1
May 27 03:18:54.925860 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 27 03:18:54.925878 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 27 03:18:54.925893 kernel: smp: Brought up 1 node, 2 CPUs
May 27 03:18:54.925908 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
May 27 03:18:54.925924 kernel: Memory: 1908044K/2037804K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125196K reserved, 0K cma-reserved)
May 27 03:18:54.925943 kernel: devtmpfs: initialized
May 27 03:18:54.925958 kernel: x86/mm: Memory block size: 128MB
May 27 03:18:54.925974 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
May 27 03:18:54.925990 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:18:54.926006 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 03:18:54.926021 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:18:54.926037 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:18:54.926052 kernel: audit: initializing netlink subsys (disabled)
May 27 03:18:54.926071 kernel: audit: type=2000 audit(1748315932.784:1): state=initialized audit_enabled=0 res=1
May 27 03:18:54.926087 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:18:54.926102 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:18:54.926118 kernel: cpuidle: using governor menu
May 27 03:18:54.926134 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:18:54.926150 kernel: dca service started, version 1.12.1
May 27 03:18:54.926166 kernel: PCI: Using configuration type 1 for base access
May 27 03:18:54.926182 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:18:54.926221 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:18:54.926241 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:18:54.926256 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:18:54.926272 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:18:54.926287 kernel: ACPI: Added _OSI(Module Device)
May 27 03:18:54.926303 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:18:54.926319 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:18:54.926335 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:18:54.926350 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 27 03:18:54.926366 kernel: ACPI: Interpreter enabled
May 27 03:18:54.926385 kernel: ACPI: PM: (supports S0 S5)
May 27 03:18:54.926400 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:18:54.926417 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:18:54.926433 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:18:54.926448 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 27 03:18:54.926465 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:18:54.926691 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:18:54.926847 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 27 03:18:54.926987 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 27 03:18:54.927006 kernel: acpiphp: Slot [3] registered
May 27 03:18:54.927023 kernel: acpiphp: Slot [4] registered
May 27 03:18:54.927039 kernel: acpiphp: Slot [5] registered
May 27 03:18:54.927054 kernel: acpiphp: Slot [6] registered
May 27 03:18:54.927070 kernel: acpiphp: Slot [7] registered
May 27 03:18:54.927086 kernel: acpiphp: Slot [8] registered
May 27 03:18:54.927102 kernel: acpiphp: Slot [9] registered
May 27 03:18:54.927118 kernel: acpiphp: Slot [10] registered
May 27 03:18:54.927136 kernel: acpiphp: Slot [11] registered
May 27 03:18:54.927153 kernel: acpiphp: Slot [12] registered
May 27 03:18:54.927169 kernel: acpiphp: Slot [13] registered
May 27 03:18:54.927185 kernel: acpiphp: Slot [14] registered
May 27 03:18:54.927965 kernel: acpiphp: Slot [15] registered
May 27 03:18:54.927983 kernel: acpiphp: Slot [16] registered
May 27 03:18:54.927997 kernel: acpiphp: Slot [17] registered
May 27 03:18:54.928011 kernel: acpiphp: Slot [18] registered
May 27 03:18:54.928028 kernel: acpiphp: Slot [19] registered
May 27 03:18:54.928047 kernel: acpiphp: Slot [20] registered
May 27 03:18:54.928063 kernel: acpiphp: Slot [21] registered
May 27 03:18:54.928079 kernel: acpiphp: Slot [22] registered
May 27 03:18:54.928095 kernel: acpiphp: Slot [23] registered
May 27 03:18:54.928109 kernel: acpiphp: Slot [24] registered
May 27 03:18:54.928122 kernel: acpiphp: Slot [25] registered
May 27 03:18:54.928137 kernel: acpiphp: Slot [26] registered
May 27 03:18:54.928150 kernel: acpiphp: Slot [27] registered
May 27 03:18:54.928166 kernel: acpiphp: Slot [28] registered
May 27 03:18:54.928181 kernel: acpiphp: Slot [29] registered
May 27 03:18:54.928231 kernel: acpiphp: Slot [30] registered
May 27 03:18:54.928246 kernel: acpiphp: Slot [31] registered
May 27 03:18:54.928260 kernel: PCI host bridge to bus 0000:00
May 27 03:18:54.928465 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:18:54.928609 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:18:54.928736 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:18:54.928863 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 27 03:18:54.928981 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
May 27 03:18:54.929097 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:18:54.929697 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 27 03:18:54.929876 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 27 03:18:54.930045 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
May 27 03:18:54.930211 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 27 03:18:54.930362 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
May 27 03:18:54.930499 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
May 27 03:18:54.930635 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
May 27 03:18:54.930770 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
May 27 03:18:54.930907 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
May 27 03:18:54.931045 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
May 27 03:18:54.931188 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:18:54.932118 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
May 27 03:18:54.932276 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 03:18:54.932410 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:18:54.932551 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
May 27 03:18:54.932684 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
May 27 03:18:54.932822 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
May 27 03:18:54.932953 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
May 27 03:18:54.932976 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:18:54.932992 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:18:54.933007 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:18:54.933022 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:18:54.933037 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 27 03:18:54.933052 kernel: iommu: Default domain type: Translated
May 27 03:18:54.933067 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:18:54.933081 kernel: efivars: Registered efivars operations
May 27 03:18:54.933100 kernel: PCI: Using ACPI for IRQ routing
May 27 03:18:54.933115 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:18:54.933129 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
May 27 03:18:54.933144 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
May 27 03:18:54.933159 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
May 27 03:18:54.933316 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
May 27 03:18:54.933450 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
May 27 03:18:54.933582 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:18:54.933600 kernel: vgaarb: loaded
May 27 03:18:54.933620 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 27 03:18:54.933635 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
May 27 03:18:54.933649 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:18:54.933664 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:18:54.933679 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:18:54.933695 kernel: pnp: PnP ACPI init
May 27 03:18:54.933710 kernel: pnp: PnP ACPI: found 5 devices
May 27 03:18:54.933725 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:18:54.933740 kernel: NET: Registered PF_INET protocol family
May 27 03:18:54.933758 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:18:54.933773 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 27 03:18:54.933788 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:18:54.933803 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 03:18:54.933818 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 27 03:18:54.933833 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 27 03:18:54.933847 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 03:18:54.933862 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 03:18:54.933880 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:18:54.933896 kernel: NET: Registered PF_XDP protocol family
May 27 03:18:54.934019 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:18:54.934138 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:18:54.934273 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:18:54.934392 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 27 03:18:54.934508 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
May 27 03:18:54.934642 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 27 03:18:54.934661 kernel: PCI: CLS 0 bytes, default 64
May 27 03:18:54.934680 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 27 03:18:54.934695 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
May 27 03:18:54.934709 kernel: clocksource: Switched to clocksource tsc
May 27 03:18:54.934724 kernel: Initialise system trusted keyrings
May 27 03:18:54.934739 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 27 03:18:54.934754 kernel: Key type asymmetric registered
May 27 03:18:54.934769 kernel: Asymmetric key parser 'x509' registered
May 27 03:18:54.934784 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:18:54.934799 kernel: io scheduler mq-deadline registered
May 27 03:18:54.934816 kernel: io scheduler kyber registered
May 27 03:18:54.934831 kernel: io scheduler bfq registered
May 27 03:18:54.934846 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:18:54.934861 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:18:54.934877 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:18:54.934892 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:18:54.934906 kernel: i8042: Warning: Keylock active
May 27 03:18:54.934921 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:18:54.934936 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:18:54.935079 kernel: rtc_cmos 00:00: RTC can wake from S4
May 27 03:18:54.935223 kernel: rtc_cmos 00:00: registered as rtc0
May 27 03:18:54.935360 kernel: rtc_cmos 00:00: setting system clock to 2025-05-27T03:18:54 UTC (1748315934)
May 27 03:18:54.935483 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 27 03:18:54.935527 kernel: intel_pstate: CPU model not supported
May 27 03:18:54.935545 kernel: efifb: probing for efifb
May 27 03:18:54.935561 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
May 27 03:18:54.935580 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 27 03:18:54.935595 kernel: efifb: scrolling: redraw
May 27 03:18:54.935611 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 03:18:54.935627 kernel: Console: switching to colour frame buffer device 100x37
May 27 03:18:54.935643 kernel: fb0: EFI VGA frame buffer device
May 27 03:18:54.935658 kernel: pstore: Using crash dump compression: deflate
May 27 03:18:54.935674 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 03:18:54.935690 kernel: NET: Registered PF_INET6 protocol family
May 27 03:18:54.935705 kernel: Segment Routing with IPv6
May 27 03:18:54.935721 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:18:54.935735 kernel: NET: Registered PF_PACKET protocol family
May 27 03:18:54.935748 kernel: Key type dns_resolver registered
May 27 03:18:54.935761 kernel: IPI shorthand broadcast: enabled
May 27 03:18:54.935774 kernel: sched_clock: Marking stable (2652002253, 154208584)->(2901922968, -95712131)
May 27 03:18:54.935788 kernel: registered taskstats version 1
May 27 03:18:54.935804 kernel: Loading compiled-in X.509 certificates
May 27 03:18:54.935817 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:18:54.935835 kernel: Demotion targets for Node 0: null
May 27 03:18:54.935850 kernel: Key type .fscrypt registered
May 27 03:18:54.935868 kernel: Key type fscrypt-provisioning registered
May 27 03:18:54.935884 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:18:54.935900 kernel: ima: Allocated hash algorithm: sha1
May 27 03:18:54.935916 kernel: ima: No architecture policies found
May 27 03:18:54.935932 kernel: clk: Disabling unused clocks
May 27 03:18:54.935947 kernel: Warning: unable to open an initial console.
May 27 03:18:54.935961 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:18:54.935977 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:18:54.935997 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:18:54.936014 kernel: Run /init as init process
May 27 03:18:54.936028 kernel: with arguments:
May 27 03:18:54.936045 kernel: /init
May 27 03:18:54.936060 kernel: with environment:
May 27 03:18:54.936075 kernel: HOME=/
May 27 03:18:54.936093 kernel: TERM=linux
May 27 03:18:54.936108 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:18:54.936125 systemd[1]: Successfully made /usr/ read-only.
May 27 03:18:54.936146 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:18:54.936163 systemd[1]: Detected virtualization amazon.
May 27 03:18:54.936178 systemd[1]: Detected architecture x86-64.
May 27 03:18:54.936235 systemd[1]: Running in initrd.
May 27 03:18:54.936257 systemd[1]: No hostname configured, using default hostname.
May 27 03:18:54.936276 systemd[1]: Hostname set to .
May 27 03:18:54.936295 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:18:54.936313 systemd[1]: Queued start job for default target initrd.target.
May 27 03:18:54.936332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:18:54.936351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:18:54.936370 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:18:54.936388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:18:54.936410 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:18:54.936429 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:18:54.936450 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:18:54.936468 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:18:54.936487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:18:54.936505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:18:54.936523 systemd[1]: Reached target paths.target - Path Units.
May 27 03:18:54.936545 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:18:54.936563 systemd[1]: Reached target swap.target - Swaps.
May 27 03:18:54.936580 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:18:54.936597 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:18:54.936613 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:18:54.936629 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:18:54.936646 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:18:54.936663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:18:54.936683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:18:54.936698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:18:54.936714 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:18:54.936730 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:18:54.936746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:18:54.936763 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:18:54.936783 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:18:54.936800 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:18:54.936819 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:18:54.936841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:18:54.936859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:18:54.936877 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:18:54.936895 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:18:54.936915 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:18:54.936931 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:18:54.936986 systemd-journald[207]: Collecting audit messages is disabled. May 27 03:18:54.937024 systemd-journald[207]: Journal started May 27 03:18:54.937060 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2416075c2fbd6af0fc8c2ab03f0c46) is 4.8M, max 38.4M, 33.6M free. May 27 03:18:54.940222 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:18:54.944353 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:18:54.949425 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:18:54.954982 systemd-modules-load[208]: Inserted module 'overlay' May 27 03:18:54.956445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:18:54.959617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:18:54.965544 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:18:54.981441 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:18:54.990182 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:18:54.998279 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 27 03:18:55.004273 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 May 27 03:18:55.006265 kernel: Bridge firewalling registered May 27 03:18:55.006708 systemd-modules-load[208]: Inserted module 'br_netfilter' May 27 03:18:55.007629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:18:55.009549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:18:55.013330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:18:55.027486 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:18:55.033358 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 03:18:55.040675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:18:55.046066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:18:55.059804 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:18:55.105260 systemd-resolved[247]: Positive Trust Anchors: May 27 03:18:55.106267 systemd-resolved[247]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:18:55.106337 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:18:55.115450 systemd-resolved[247]: Defaulting to hostname 'linux'. May 27 03:18:55.116924 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:18:55.118221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:18:55.163227 kernel: SCSI subsystem initialized May 27 03:18:55.173219 kernel: Loading iSCSI transport class v2.0-870. May 27 03:18:55.185224 kernel: iscsi: registered transport (tcp) May 27 03:18:55.207235 kernel: iscsi: registered transport (qla4xxx) May 27 03:18:55.207365 kernel: QLogic iSCSI HBA Driver May 27 03:18:55.226390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:18:55.246173 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:18:55.248901 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:18:55.293328 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:18:55.295657 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 27 03:18:55.349226 kernel: raid6: avx512x4 gen() 17935 MB/s May 27 03:18:55.367217 kernel: raid6: avx512x2 gen() 17939 MB/s May 27 03:18:55.385219 kernel: raid6: avx512x1 gen() 17825 MB/s May 27 03:18:55.403216 kernel: raid6: avx2x4 gen() 17865 MB/s May 27 03:18:55.421217 kernel: raid6: avx2x2 gen() 17834 MB/s May 27 03:18:55.439521 kernel: raid6: avx2x1 gen() 13703 MB/s May 27 03:18:55.439577 kernel: raid6: using algorithm avx512x2 gen() 17939 MB/s May 27 03:18:55.458452 kernel: raid6: .... xor() 24728 MB/s, rmw enabled May 27 03:18:55.458514 kernel: raid6: using avx512x2 recovery algorithm May 27 03:18:55.479223 kernel: xor: automatically using best checksumming function avx May 27 03:18:55.647223 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:18:55.653945 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:18:55.656044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:18:55.683732 systemd-udevd[456]: Using default interface naming scheme 'v255'. May 27 03:18:55.690442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:18:55.694468 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:18:55.717469 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation May 27 03:18:55.743829 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:18:55.745717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:18:55.806474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:18:55.809380 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 27 03:18:55.903681 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 27 03:18:55.903973 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 27 03:18:55.911222 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 27 03:18:55.926717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:18:55.926934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:18:55.929410 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:c8:a2:90:f4:8f May 27 03:18:55.930302 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:18:55.932768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:18:55.934513 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:18:55.937690 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:18:55.956055 kernel: AES CTR mode by8 optimization enabled May 27 03:18:55.956116 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 27 03:18:55.957145 (udev-worker)[503]: Network interface NamePolicy= disabled on kernel command line. May 27 03:18:55.960636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:18:55.961601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:18:55.983318 kernel: nvme nvme0: pci function 0000:00:04.0 May 27 03:18:55.983544 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 27 03:18:55.991516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:18:55.994316 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 27 03:18:56.000629 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
May 27 03:18:56.000692 kernel: GPT:9289727 != 16777215 May 27 03:18:56.000714 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:18:56.000732 kernel: GPT:9289727 != 16777215 May 27 03:18:56.000757 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 03:18:56.000777 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 03:18:56.028720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:18:56.034227 kernel: nvme nvme0: using unchecked data buffer May 27 03:18:56.103531 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 27 03:18:56.138686 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 27 03:18:56.140740 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 27 03:18:56.143927 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:18:56.168669 disk-uuid[686]: Primary Header is updated. May 27 03:18:56.168669 disk-uuid[686]: Secondary Entries is updated. May 27 03:18:56.168669 disk-uuid[686]: Secondary Header is updated. May 27 03:18:56.202305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 27 03:18:56.220550 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 27 03:18:56.323092 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:18:56.329037 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:18:56.329581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:18:56.330819 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:18:56.332651 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
May 27 03:18:56.364352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:18:57.191396 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 03:18:57.192187 disk-uuid[688]: The operation has completed successfully. May 27 03:18:57.318401 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:18:57.318526 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:18:57.363968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:18:57.382015 sh[876]: Success May 27 03:18:57.408415 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:18:57.408489 kernel: device-mapper: uevent: version 1.0.3 May 27 03:18:57.411421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:18:57.421218 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 27 03:18:57.522612 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:18:57.527428 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:18:57.538144 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 03:18:57.560555 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:18:57.560615 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (900) May 27 03:18:57.567209 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:18:57.567356 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:18:57.567372 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:18:57.619050 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
May 27 03:18:57.620116 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:18:57.620701 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:18:57.621461 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:18:57.623085 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 03:18:57.662242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (933) May 27 03:18:57.668604 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:18:57.668685 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 03:18:57.668711 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 03:18:57.685256 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:18:57.686660 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:18:57.689531 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 03:18:57.747064 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:18:57.750599 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:18:57.798532 systemd-networkd[1069]: lo: Link UP May 27 03:18:57.800972 systemd-networkd[1069]: lo: Gained carrier May 27 03:18:57.803339 systemd-networkd[1069]: Enumeration completed May 27 03:18:57.804463 systemd-networkd[1069]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:18:57.804469 systemd-networkd[1069]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 27 03:18:57.806321 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:18:57.808095 systemd[1]: Reached target network.target - Network. May 27 03:18:57.811592 systemd-networkd[1069]: eth0: Link UP May 27 03:18:57.811597 systemd-networkd[1069]: eth0: Gained carrier May 27 03:18:57.811615 systemd-networkd[1069]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:18:57.824302 systemd-networkd[1069]: eth0: DHCPv4 address 172.31.29.86/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 27 03:18:57.903723 ignition[1016]: Ignition 2.21.0 May 27 03:18:57.903739 ignition[1016]: Stage: fetch-offline May 27 03:18:57.903969 ignition[1016]: no configs at "/usr/lib/ignition/base.d" May 27 03:18:57.903980 ignition[1016]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:18:57.904950 ignition[1016]: Ignition finished successfully May 27 03:18:57.907410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:18:57.908985 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 27 03:18:57.937473 ignition[1079]: Ignition 2.21.0 May 27 03:18:57.937484 ignition[1079]: Stage: fetch May 27 03:18:57.937762 ignition[1079]: no configs at "/usr/lib/ignition/base.d" May 27 03:18:57.937771 ignition[1079]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:18:57.937853 ignition[1079]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:18:57.979562 ignition[1079]: PUT result: OK May 27 03:18:57.983004 ignition[1079]: parsed url from cmdline: "" May 27 03:18:57.983017 ignition[1079]: no config URL provided May 27 03:18:57.983035 ignition[1079]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:18:57.983051 ignition[1079]: no config at "/usr/lib/ignition/user.ign" May 27 03:18:57.983077 ignition[1079]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:18:57.984598 ignition[1079]: PUT result: OK May 27 03:18:57.984671 ignition[1079]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 27 03:18:57.985964 ignition[1079]: GET result: OK May 27 03:18:57.986133 ignition[1079]: parsing config with SHA512: b1e3873eebf07a842cea3789b0c8d3afce7b239b72acfc5ece17360572da9004896f936f96f357dc364293e1e0a4e261b4a848b8fb512f8a60da7aebcd520089 May 27 03:18:57.993449 unknown[1079]: fetched base config from "system" May 27 03:18:57.993466 unknown[1079]: fetched base config from "system" May 27 03:18:57.993998 ignition[1079]: fetch: fetch complete May 27 03:18:57.993473 unknown[1079]: fetched user config from "aws" May 27 03:18:57.994006 ignition[1079]: fetch: fetch passed May 27 03:18:57.994065 ignition[1079]: Ignition finished successfully May 27 03:18:57.997483 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 03:18:57.998935 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 27 03:18:58.029365 ignition[1086]: Ignition 2.21.0 May 27 03:18:58.029380 ignition[1086]: Stage: kargs May 27 03:18:58.029779 ignition[1086]: no configs at "/usr/lib/ignition/base.d" May 27 03:18:58.029791 ignition[1086]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:18:58.029909 ignition[1086]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:18:58.033791 ignition[1086]: PUT result: OK May 27 03:18:58.039047 ignition[1086]: kargs: kargs passed May 27 03:18:58.039605 ignition[1086]: Ignition finished successfully May 27 03:18:58.041297 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:18:58.042628 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 03:18:58.068351 ignition[1092]: Ignition 2.21.0 May 27 03:18:58.068367 ignition[1092]: Stage: disks May 27 03:18:58.068752 ignition[1092]: no configs at "/usr/lib/ignition/base.d" May 27 03:18:58.068765 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:18:58.068871 ignition[1092]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:18:58.069776 ignition[1092]: PUT result: OK May 27 03:18:58.073345 ignition[1092]: disks: disks passed May 27 03:18:58.073875 ignition[1092]: Ignition finished successfully May 27 03:18:58.075764 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:18:58.076407 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:18:58.076785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:18:58.077355 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:18:58.077903 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:18:58.078473 systemd[1]: Reached target basic.target - Basic System. May 27 03:18:58.080256 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 27 03:18:58.131720 systemd-fsck[1101]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:18:58.134775 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:18:58.136883 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:18:58.280235 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:18:58.281003 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:18:58.281903 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:18:58.283822 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:18:58.286306 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:18:58.287568 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:18:58.288633 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:18:58.288660 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:18:58.295716 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:18:58.297795 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:18:58.321220 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1120) May 27 03:18:58.324267 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:18:58.324340 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 03:18:58.326549 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 03:18:58.338820 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 03:18:58.391602 initrd-setup-root[1144]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:18:58.398744 initrd-setup-root[1151]: cut: /sysroot/etc/group: No such file or directory May 27 03:18:58.404240 initrd-setup-root[1158]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:18:58.408656 initrd-setup-root[1165]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:18:58.541179 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:18:58.543114 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 03:18:58.546342 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 03:18:58.566686 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:18:58.569503 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:18:58.600864 ignition[1236]: INFO : Ignition 2.21.0 May 27 03:18:58.601727 ignition[1236]: INFO : Stage: mount May 27 03:18:58.601727 ignition[1236]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:18:58.601727 ignition[1236]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:18:58.603392 ignition[1236]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:18:58.604422 ignition[1236]: INFO : PUT result: OK May 27 03:18:58.605770 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 03:18:58.608294 ignition[1236]: INFO : mount: mount passed May 27 03:18:58.608876 ignition[1236]: INFO : Ignition finished successfully May 27 03:18:58.610266 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:18:58.611934 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:18:58.627645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 27 03:18:58.664228 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1248) May 27 03:18:58.668208 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:18:58.668279 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 03:18:58.668296 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 03:18:58.676606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:18:58.708524 ignition[1264]: INFO : Ignition 2.21.0 May 27 03:18:58.708524 ignition[1264]: INFO : Stage: files May 27 03:18:58.710158 ignition[1264]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:18:58.710158 ignition[1264]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:18:58.710158 ignition[1264]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:18:58.710158 ignition[1264]: INFO : PUT result: OK May 27 03:18:58.712364 ignition[1264]: DEBUG : files: compiled without relabeling support, skipping May 27 03:18:58.713762 ignition[1264]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:18:58.713762 ignition[1264]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:18:58.717175 ignition[1264]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:18:58.717884 ignition[1264]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:18:58.718777 ignition[1264]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:18:58.718328 unknown[1264]: wrote ssh authorized keys file for user: core May 27 03:18:58.721671 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 27 03:18:58.722394 ignition[1264]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 27 03:18:59.037850 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 03:18:59.160428 systemd-networkd[1069]: eth0: Gained IPv6LL May 27 03:19:00.021481 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:19:00.022626 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:19:00.037623 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:19:00.037623 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" 
May 27 03:19:00.037623 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:00.037623 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:00.037623 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:00.037623 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 27 03:19:00.792393 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:19:02.495987 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:02.495987 ignition[1264]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:19:02.499126 ignition[1264]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:19:02.504782 ignition[1264]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:19:02.504782 ignition[1264]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:19:02.504782 ignition[1264]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:19:02.508921 ignition[1264]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:19:02.508921 ignition[1264]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:19:02.508921 ignition[1264]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:19:02.508921 ignition[1264]: INFO : files: files passed
May 27 03:19:02.508921 ignition[1264]: INFO : Ignition finished successfully
May 27 03:19:02.506793 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:19:02.510482 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:19:02.513346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:19:02.523143 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:19:02.524278 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:19:02.531135 initrd-setup-root-after-ignition[1295]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:19:02.533108 initrd-setup-root-after-ignition[1295]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:19:02.535289 initrd-setup-root-after-ignition[1299]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:19:02.535941 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:19:02.537567 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:19:02.539841 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:19:02.601954 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:19:02.602102 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:19:02.603426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:19:02.604569 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:19:02.605423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:19:02.606585 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:19:02.631175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:19:02.633412 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:19:02.656475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:19:02.657176 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:19:02.658290 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:19:02.659149 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:19:02.659548 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:19:02.660613 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:19:02.661526 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:19:02.662334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:19:02.663114 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:19:02.664019 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:19:02.665419 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:19:02.666908 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:19:02.669452 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:19:02.670096 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:19:02.672050 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:19:02.672904 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:19:02.673742 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:19:02.673936 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:19:02.675042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:19:02.676042 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:19:02.676724 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:19:02.676854 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:19:02.677524 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:19:02.677748 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:19:02.678771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:19:02.678971 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:19:02.679808 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:19:02.680011 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:19:02.683465 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:19:02.684236 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:19:02.684480 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:19:02.687493 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:19:02.690320 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:19:02.690614 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:19:02.691537 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:19:02.691747 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:19:02.697919 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:19:02.700253 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:19:02.719301 ignition[1319]: INFO : Ignition 2.21.0
May 27 03:19:02.719301 ignition[1319]: INFO : Stage: umount
May 27 03:19:02.721362 ignition[1319]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:19:02.721362 ignition[1319]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 03:19:02.721362 ignition[1319]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 03:19:02.721362 ignition[1319]: INFO : PUT result: OK
May 27 03:19:02.726517 ignition[1319]: INFO : umount: umount passed
May 27 03:19:02.727311 ignition[1319]: INFO : Ignition finished successfully
May 27 03:19:02.728779 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:19:02.728934 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:19:02.729733 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:19:02.729798 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:19:02.730452 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:19:02.730513 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:19:02.731396 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 03:19:02.731456 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 03:19:02.732381 systemd[1]: Stopped target network.target - Network.
May 27 03:19:02.732837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:19:02.732895 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:19:02.733398 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:19:02.733673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:19:02.735308 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:19:02.735883 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:19:02.736345 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:19:02.736768 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:19:02.736809 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:19:02.737103 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:19:02.737138 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:19:02.737746 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:19:02.737822 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:19:02.738577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:19:02.738634 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:19:02.742783 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:19:02.743433 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:19:02.747646 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:19:02.748523 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:19:02.748720 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:19:02.753670 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:19:02.754065 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:19:02.754217 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:19:02.756331 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:19:02.757734 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:19:02.758214 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:19:02.758279 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:19:02.760063 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:19:02.760584 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:19:02.760661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:19:02.761330 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:19:02.761389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:19:02.762183 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:19:02.762346 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:19:02.762993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:19:02.763056 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:19:02.767437 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:19:02.769575 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:19:02.769673 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:19:02.776167 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:19:02.777110 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:19:02.779034 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:19:02.779122 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:19:02.781343 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:19:02.781401 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:19:02.782167 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:19:02.782259 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:19:02.783053 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:19:02.783112 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:19:02.785589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:19:02.785667 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:19:02.790284 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:19:02.791815 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:19:02.791918 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:19:02.794490 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:19:02.794570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:19:02.796876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:19:02.796954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:19:02.801158 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 03:19:02.801275 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 03:19:02.801341 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:19:02.801844 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:19:02.804319 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:19:02.811796 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:19:02.811927 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:19:02.889161 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:19:02.889298 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:19:02.890055 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:19:02.890680 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:19:02.890747 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:19:02.894055 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:19:02.927863 systemd[1]: Switching root.
May 27 03:19:02.960121 systemd-journald[207]: Journal stopped
May 27 03:19:04.380298 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
May 27 03:19:04.380393 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:19:04.380426 kernel: SELinux: policy capability open_perms=1
May 27 03:19:04.380444 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:19:04.380460 kernel: SELinux: policy capability always_check_network=0
May 27 03:19:04.380477 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:19:04.380496 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:19:04.380513 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:19:04.380536 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:19:04.380553 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:19:04.380575 kernel: audit: type=1403 audit(1748315943.160:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:19:04.380598 systemd[1]: Successfully loaded SELinux policy in 52.952ms.
May 27 03:19:04.380629 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.549ms.
May 27 03:19:04.380649 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:19:04.380670 systemd[1]: Detected virtualization amazon.
May 27 03:19:04.380688 systemd[1]: Detected architecture x86-64.
May 27 03:19:04.380706 systemd[1]: Detected first boot.
May 27 03:19:04.380725 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:19:04.380743 zram_generator::config[1362]: No configuration found.
May 27 03:19:04.380763 kernel: Guest personality initialized and is inactive
May 27 03:19:04.380782 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:19:04.380800 kernel: Initialized host personality
May 27 03:19:04.380816 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:19:04.380834 systemd[1]: Populated /etc with preset unit settings.
May 27 03:19:04.380854 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:19:04.380872 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:19:04.380890 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:19:04.380909 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:19:04.380937 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:19:04.380956 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:19:04.380974 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:19:04.380993 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:19:04.381012 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:19:04.381030 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:19:04.381051 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:19:04.381071 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:19:04.381088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:19:04.381109 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:19:04.381127 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:19:04.381146 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:19:04.381165 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:19:04.381184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:19:04.381216 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:19:04.381235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:19:04.381254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:19:04.381276 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:19:04.381295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:19:04.381313 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:19:04.381331 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:19:04.381350 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:19:04.381372 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:19:04.381391 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:19:04.381410 systemd[1]: Reached target swap.target - Swaps.
May 27 03:19:04.381428 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:19:04.381448 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:19:04.381467 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:19:04.381485 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:19:04.381504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:19:04.381521 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:19:04.381539 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:19:04.381558 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:19:04.381576 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:19:04.381595 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:19:04.381616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:04.381634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:19:04.381652 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:19:04.381670 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:19:04.381692 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:19:04.381710 systemd[1]: Reached target machines.target - Containers.
May 27 03:19:04.381728 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:19:04.381747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:04.381768 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:19:04.381787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:19:04.381805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:19:04.381823 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:19:04.381842 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:19:04.381859 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:19:04.381878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:19:04.381897 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:19:04.381915 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:19:04.381935 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:19:04.381951 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:19:04.381969 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:19:04.381986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:04.382004 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:19:04.382022 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:19:04.382040 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:19:04.382060 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:19:04.382083 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:19:04.382102 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:19:04.382125 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:19:04.382143 systemd[1]: Stopped verity-setup.service.
May 27 03:19:04.382161 kernel: ACPI: bus type drm_connector registered
May 27 03:19:04.382180 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:04.384248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:19:04.384285 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:19:04.384308 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:19:04.384330 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:19:04.384359 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:19:04.384382 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:19:04.384407 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:19:04.384427 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:19:04.384448 kernel: loop: module loaded
May 27 03:19:04.384470 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:19:04.384492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:19:04.384514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:19:04.384537 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:19:04.384563 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:19:04.384586 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:19:04.384608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:19:04.384629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:19:04.384651 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:19:04.384674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:19:04.384696 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:19:04.384719 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:19:04.384741 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:19:04.384768 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:19:04.384791 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:19:04.384813 kernel: fuse: init (API version 7.41)
May 27 03:19:04.384834 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:19:04.384856 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:19:04.384884 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:19:04.384951 systemd-journald[1448]: Collecting audit messages is disabled.
May 27 03:19:04.384994 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:19:04.385018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:04.385040 systemd-journald[1448]: Journal started
May 27 03:19:04.385082 systemd-journald[1448]: Runtime Journal (/run/log/journal/ec2416075c2fbd6af0fc8c2ab03f0c46) is 4.8M, max 38.4M, 33.6M free.
May 27 03:19:03.910469 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:19:03.935741 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 27 03:19:03.936216 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:19:04.388243 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:19:04.407740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:19:04.407836 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:19:04.407866 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:19:04.415219 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:19:04.420220 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:19:04.430240 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:19:04.433226 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:19:04.436875 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:19:04.437873 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:19:04.439135 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:19:04.441127 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:19:04.444781 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:19:04.476915 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:19:04.497558 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:19:04.498840 kernel: loop0: detected capacity change from 0 to 221472
May 27 03:19:04.500107 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:19:04.507554 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:19:04.510015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:19:04.540979 systemd-journald[1448]: Time spent on flushing to /var/log/journal/ec2416075c2fbd6af0fc8c2ab03f0c46 is 39.141ms for 1018 entries.
May 27 03:19:04.540979 systemd-journald[1448]: System Journal (/var/log/journal/ec2416075c2fbd6af0fc8c2ab03f0c46) is 8M, max 195.6M, 187.6M free.
May 27 03:19:04.588686 systemd-journald[1448]: Received client request to flush runtime journal.
May 27 03:19:04.588748 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:19:04.558644 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:19:04.592125 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:19:04.600163 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:19:04.607139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:19:04.618242 kernel: loop1: detected capacity change from 0 to 72352
May 27 03:19:04.649876 systemd-tmpfiles[1511]: ACLs are not supported, ignoring.
May 27 03:19:04.650723 systemd-tmpfiles[1511]: ACLs are not supported, ignoring.
May 27 03:19:04.663624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:19:04.738612 kernel: loop2: detected capacity change from 0 to 113872
May 27 03:19:04.842219 kernel: loop3: detected capacity change from 0 to 146240
May 27 03:19:04.917543 kernel: loop4: detected capacity change from 0 to 221472
May 27 03:19:04.938154 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:19:04.942311 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:19:04.962658 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:19:04.970564 kernel: loop5: detected capacity change from 0 to 72352
May 27 03:19:04.995261 kernel: loop6: detected capacity change from 0 to 113872
May 27 03:19:05.034232 kernel: loop7: detected capacity change from 0 to 146240
May 27 03:19:05.077040 (sd-merge)[1517]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 27 03:19:05.077600 (sd-merge)[1517]: Merged extensions into '/usr'.
May 27 03:19:05.082842 systemd[1]: Reload requested from client PID 1476 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:19:05.082856 systemd[1]: Reloading...
May 27 03:19:05.222129 zram_generator::config[1545]: No configuration found.
May 27 03:19:05.405888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:19:05.427852 ldconfig[1472]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:19:05.559809 systemd[1]: Reloading finished in 476 ms.
May 27 03:19:05.573856 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:19:05.578862 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:19:05.586723 systemd[1]: Starting ensure-sysext.service...
May 27 03:19:05.590338 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:19:05.618168 systemd[1]: Reload requested from client PID 1597 ('systemctl') (unit ensure-sysext.service)...
May 27 03:19:05.618320 systemd[1]: Reloading...
May 27 03:19:05.625023 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:19:05.625056 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:19:05.627504 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:19:05.627766 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:19:05.628633 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:19:05.628885 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
May 27 03:19:05.628944 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
May 27 03:19:05.636656 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:19:05.636676 systemd-tmpfiles[1598]: Skipping /boot
May 27 03:19:05.700750 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:19:05.702236 systemd-tmpfiles[1598]: Skipping /boot
May 27 03:19:05.705220 zram_generator::config[1621]: No configuration found.
May 27 03:19:05.842184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:19:05.934430 systemd[1]: Reloading finished in 315 ms.
May 27 03:19:05.957527 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:19:05.968417 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:19:05.976810 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:19:05.980415 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:19:05.983431 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:19:05.989285 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:19:05.992467 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:19:05.996438 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:19:05.999512 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:05.999709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:06.001056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:19:06.006175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:19:06.008029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:19:06.008546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:06.008657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:06.008748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:06.011517 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:06.011711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:06.011861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:06.011948 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:06.012035 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:06.015735 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:06.015985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:06.021385 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:19:06.021998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:06.022120 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:06.023297 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:19:06.024725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:06.028390 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:19:06.036181 systemd[1]: Finished ensure-sysext.service.
May 27 03:19:06.053790 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:19:06.062004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:19:06.063137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:19:06.069605 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:19:06.071265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:19:06.071975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:19:06.073819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:19:06.075168 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:19:06.076468 systemd-udevd[1683]: Using default interface naming scheme 'v255'.
May 27 03:19:06.078908 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:19:06.079697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:19:06.082513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:19:06.082664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:19:06.084432 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:19:06.109074 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:19:06.126589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:19:06.129847 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:19:06.137233 augenrules[1719]: No rules
May 27 03:19:06.140355 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:19:06.141500 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:19:06.142476 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:19:06.163124 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:19:06.164103 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:19:06.296263 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:19:06.307718 (udev-worker)[1745]: Network interface NamePolicy= disabled on kernel command line.
May 27 03:19:06.476988 systemd-networkd[1721]: lo: Link UP
May 27 03:19:06.477415 systemd-networkd[1721]: lo: Gained carrier
May 27 03:19:06.481326 systemd-networkd[1721]: Enumeration completed
May 27 03:19:06.481582 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:19:06.483681 systemd-networkd[1721]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:19:06.484355 systemd-networkd[1721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:19:06.487555 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:19:06.491570 systemd-networkd[1721]: eth0: Link UP
May 27 03:19:06.491945 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:19:06.493560 systemd-networkd[1721]: eth0: Gained carrier
May 27 03:19:06.494856 systemd-networkd[1721]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:19:06.511273 systemd-networkd[1721]: eth0: DHCPv4 address 172.31.29.86/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 27 03:19:06.542760 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:19:06.571645 systemd-resolved[1682]: Positive Trust Anchors:
May 27 03:19:06.571670 systemd-resolved[1682]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:19:06.571725 systemd-resolved[1682]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:19:06.580588 systemd-resolved[1682]: Defaulting to hostname 'linux'.
May 27 03:19:06.586220 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:19:06.584533 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:19:06.602000 systemd[1]: Reached target network.target - Network.
May 27 03:19:06.603296 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:19:06.604588 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:19:06.605606 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:19:06.607335 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:19:06.607923 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:19:06.609484 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:19:06.610438 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:19:06.611524 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:19:06.613297 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:19:06.613343 systemd[1]: Reached target paths.target - Path Units.
May 27 03:19:06.613894 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:19:06.617610 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:19:06.621139 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:19:06.630026 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:19:06.632525 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:19:06.633776 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:19:06.645551 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 27 03:19:06.644516 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:19:06.647356 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:19:06.651158 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:19:06.653227 kernel: ACPI: button: Power Button [PWRF]
May 27 03:19:06.658225 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
May 27 03:19:06.664161 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:19:06.664841 systemd[1]: Reached target basic.target - Basic System.
May 27 03:19:06.665525 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:19:06.665559 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:19:06.669329 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:19:06.677485 kernel: ACPI: button: Sleep Button [SLPF]
May 27 03:19:06.674529 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 03:19:06.679444 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:19:06.683368 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:19:06.688403 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:19:06.699510 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:19:06.700117 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:19:06.706732 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:19:06.716831 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:19:06.722158 systemd[1]: Started ntpd.service - Network Time Service.
May 27 03:19:06.745779 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:19:06.753378 systemd[1]: Starting setup-oem.service - Setup OEM...
May 27 03:19:06.757453 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:19:06.769520 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:19:06.780803 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:19:06.784660 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:19:06.787654 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:19:06.788945 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:19:06.803363 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:19:06.811292 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:19:06.816048 jq[1858]: false
May 27 03:19:06.819585 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:19:06.820449 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:19:06.829943 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:19:06.830841 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:19:06.869306 update_engine[1871]: I20250527 03:19:06.867131 1871 main.cc:92] Flatcar Update Engine starting
May 27 03:19:06.859332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 27 03:19:06.881032 oslogin_cache_refresh[1860]: Refreshing passwd entry cache
May 27 03:19:06.881934 google_oslogin_nss_cache[1860]: oslogin_cache_refresh[1860]: Refreshing passwd entry cache
May 27 03:19:06.888731 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:19:06.893720 (ntainerd)[1881]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:19:06.921821 oslogin_cache_refresh[1860]: Failure getting users, quitting
May 27 03:19:06.923531 google_oslogin_nss_cache[1860]: oslogin_cache_refresh[1860]: Failure getting users, quitting
May 27 03:19:06.923531 google_oslogin_nss_cache[1860]: oslogin_cache_refresh[1860]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:19:06.923531 google_oslogin_nss_cache[1860]: oslogin_cache_refresh[1860]: Refreshing group entry cache
May 27 03:19:06.921845 oslogin_cache_refresh[1860]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:19:06.921906 oslogin_cache_refresh[1860]: Refreshing group entry cache
May 27 03:19:06.925893 google_oslogin_nss_cache[1860]: oslogin_cache_refresh[1860]: Failure getting groups, quitting
May 27 03:19:06.925893 google_oslogin_nss_cache[1860]: oslogin_cache_refresh[1860]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:19:06.924394 oslogin_cache_refresh[1860]: Failure getting groups, quitting
May 27 03:19:06.924407 oslogin_cache_refresh[1860]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:19:06.932924 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:19:06.933827 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:19:06.953476 tar[1874]: linux-amd64/helm
May 27 03:19:06.971623 jq[1872]: true
May 27 03:19:06.986907 systemd[1]: Finished setup-oem.service - Setup OEM.
May 27 03:19:06.987956 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:19:06.988257 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:19:07.037398 extend-filesystems[1859]: Found loop4
May 27 03:19:07.044432 extend-filesystems[1859]: Found loop5
May 27 03:19:07.044432 extend-filesystems[1859]: Found loop6
May 27 03:19:07.044432 extend-filesystems[1859]: Found loop7
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1p1
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1p2
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1p3
May 27 03:19:07.044432 extend-filesystems[1859]: Found usr
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1p4
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1p6
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1p7
May 27 03:19:07.044432 extend-filesystems[1859]: Found nvme0n1p9
May 27 03:19:07.044432 extend-filesystems[1859]: Checking size of /dev/nvme0n1p9
May 27 03:19:07.039865 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:37:40 UTC 2025 (1): Starting
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: ----------------------------------------------------
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: ntp-4 is maintained by Network Time Foundation,
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: corporation. Support and training for ntp-4 are
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: available at https://www.nwtime.org/support
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: ----------------------------------------------------
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: proto: precision = 0.065 usec (-24)
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: basedate set to 2025-05-15
May 27 03:19:07.125555 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: gps base set to 2025-05-18 (week 2367)
May 27 03:19:07.126138 jq[1902]: true
May 27 03:19:07.075463 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:37:40 UTC 2025 (1): Starting
May 27 03:19:07.084377 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Listen normally on 3 eth0 172.31.29.86:123
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Listen normally on 4 lo [::1]:123
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: bind(21) AF_INET6 fe80::4c8:a2ff:fe90:f48f%2#123 flags 0x11 failed: Cannot assign requested address
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: unable to create socket on eth0 (5) for fe80::4c8:a2ff:fe90:f48f%2#123
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: failed to init interface for address fe80::4c8:a2ff:fe90:f48f%2
May 27 03:19:07.156144 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: Listening on routing socket on fd #21 for interface updates
May 27 03:19:07.156552 extend-filesystems[1859]: Resized partition /dev/nvme0n1p9
May 27 03:19:07.162113 update_engine[1871]: I20250527 03:19:07.155514 1871 update_check_scheduler.cc:74] Next update check in 5m0s
May 27 03:19:07.075489 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 03:19:07.168625 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 27 03:19:07.090721 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:19:07.168934 extend-filesystems[1931]: resize2fs 1.47.2 (1-Jan-2025)
May 27 03:19:07.075499 ntpd[1862]: ----------------------------------------------------
May 27 03:19:07.090756 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:19:07.075512 ntpd[1862]: ntp-4 is maintained by Network Time Foundation,
May 27 03:19:07.092409 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:19:07.075521 ntpd[1862]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 03:19:07.092435 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:19:07.075531 ntpd[1862]: corporation. Support and training for ntp-4 are
May 27 03:19:07.130483 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 27 03:19:07.075540 ntpd[1862]: available at https://www.nwtime.org/support
May 27 03:19:07.147820 systemd[1]: Started update-engine.service - Update Engine.
May 27 03:19:07.075550 ntpd[1862]: ----------------------------------------------------
May 27 03:19:07.083396 ntpd[1862]: proto: precision = 0.065 usec (-24)
May 27 03:19:07.084122 dbus-daemon[1856]: [system] SELinux support is enabled
May 27 03:19:07.104452 ntpd[1862]: basedate set to 2025-05-15
May 27 03:19:07.183846 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:19:07.183846 ntpd[1862]: 27 May 03:19:07 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:19:07.104477 ntpd[1862]: gps base set to 2025-05-18 (week 2367)
May 27 03:19:07.121855 dbus-daemon[1856]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1721 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 27 03:19:07.131109 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123
May 27 03:19:07.131213 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 03:19:07.131426 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123
May 27 03:19:07.131468 ntpd[1862]: Listen normally on 3 eth0 172.31.29.86:123
May 27 03:19:07.131512 ntpd[1862]: Listen normally on 4 lo [::1]:123
May 27 03:19:07.131566 ntpd[1862]: bind(21) AF_INET6 fe80::4c8:a2ff:fe90:f48f%2#123 flags 0x11 failed: Cannot assign requested address
May 27 03:19:07.131591 ntpd[1862]: unable to create socket on eth0 (5) for fe80::4c8:a2ff:fe90:f48f%2#123
May 27 03:19:07.131606 ntpd[1862]: failed to init interface for address fe80::4c8:a2ff:fe90:f48f%2
May 27 03:19:07.131641 ntpd[1862]: Listening on routing socket on fd #21 for interface updates
May 27 03:19:07.175286 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:19:07.175323 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:19:07.232339 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 03:19:07.285954 coreos-metadata[1855]: May 27 03:19:07.285 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 27 03:19:07.293228 coreos-metadata[1855]: May 27 03:19:07.291 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 27 03:19:07.293648 coreos-metadata[1855]: May 27 03:19:07.293 INFO Fetch successful
May 27 03:19:07.294310 coreos-metadata[1855]: May 27 03:19:07.294 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 27 03:19:07.297019 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 27 03:19:07.315346 coreos-metadata[1855]: May 27 03:19:07.299 INFO Fetch successful
May 27 03:19:07.315346 coreos-metadata[1855]: May 27 03:19:07.299 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 27 03:19:07.315346 coreos-metadata[1855]: May 27 03:19:07.305 INFO Fetch successful
May 27 03:19:07.315346 coreos-metadata[1855]: May 27 03:19:07.305 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 27 03:19:07.315346 coreos-metadata[1855]: May 27 03:19:07.312 INFO Fetch successful
May 27 03:19:07.315346 coreos-metadata[1855]: May 27 03:19:07.312 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 27 03:19:07.315658 coreos-metadata[1855]: May 27 03:19:07.315 INFO Fetch failed with 404: resource not found
May 27 03:19:07.315751 coreos-metadata[1855]: May 27 03:19:07.315 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 27 03:19:07.319038 extend-filesystems[1931]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 27 03:19:07.319038 extend-filesystems[1931]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 03:19:07.319038 extend-filesystems[1931]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 27 03:19:07.321598 extend-filesystems[1859]: Resized filesystem in /dev/nvme0n1p9
May 27 03:19:07.323638 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 03:19:07.323922 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 03:19:07.326146 bash[1944]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:19:07.329143 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 03:19:07.331798 coreos-metadata[1855]: May 27 03:19:07.331 INFO Fetch successful
May 27 03:19:07.331798 coreos-metadata[1855]: May 27 03:19:07.331 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 27 03:19:07.338686 coreos-metadata[1855]: May 27 03:19:07.337 INFO Fetch successful
May 27 03:19:07.338686 coreos-metadata[1855]: May 27 03:19:07.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 27 03:19:07.341373 coreos-metadata[1855]: May 27 03:19:07.341 INFO Fetch successful
May 27 03:19:07.341493 coreos-metadata[1855]: May 27 03:19:07.341 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 27 03:19:07.341764 systemd[1]: Starting sshkeys.service...
May 27 03:19:07.360127 coreos-metadata[1855]: May 27 03:19:07.359 INFO Fetch successful
May 27 03:19:07.360127 coreos-metadata[1855]: May 27 03:19:07.359 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 27 03:19:07.362898 coreos-metadata[1855]: May 27 03:19:07.361 INFO Fetch successful
May 27 03:19:07.389981 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 03:19:07.394602 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 03:19:07.409224 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 27 03:19:07.450182 systemd-logind[1869]: New seat seat0.
May 27 03:19:07.453656 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 03:19:07.473396 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 03:19:07.474713 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 03:19:07.478363 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 27 03:19:07.492348 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 27 03:19:07.496346 dbus-daemon[1856]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1929 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 27 03:19:07.510701 systemd[1]: Starting polkit.service - Authorization Manager...
May 27 03:19:07.544875 systemd-networkd[1721]: eth0: Gained IPv6LL
May 27 03:19:07.575204 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 03:19:07.577730 systemd[1]: Reached target network-online.target - Network is Online.
May 27 03:19:07.586108 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 27 03:19:07.598081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:19:07.602748 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 03:19:07.772559 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 03:19:07.807275 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:19:07.838088 systemd-logind[1869]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 03:19:07.843381 coreos-metadata[1955]: May 27 03:19:07.843 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 27 03:19:07.843708 coreos-metadata[1955]: May 27 03:19:07.843 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 27 03:19:07.843708 coreos-metadata[1955]: May 27 03:19:07.843 INFO Fetch successful
May 27 03:19:07.843708 coreos-metadata[1955]: May 27 03:19:07.843 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 27 03:19:07.843708 coreos-metadata[1955]: May 27 03:19:07.843 INFO Fetch successful
May 27 03:19:07.845858 unknown[1955]: wrote ssh authorized keys file for user: core
May 27 03:19:07.886834 amazon-ssm-agent[1963]: Initializing new seelog logger
May 27 03:19:07.890042 amazon-ssm-agent[1963]: New Seelog Logger Creation Complete
May 27 03:19:07.891236 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.891236 amazon-ssm-agent[1963]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.891236 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 processing appconfig overrides
May 27 03:19:07.899854 amazon-ssm-agent[1963]: 2025-05-27 03:19:07.8994 INFO Proxy environment variables:
May 27 03:19:07.900109 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.900109 amazon-ssm-agent[1963]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.905921 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 processing appconfig overrides
May 27 03:19:07.911240 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.911240 amazon-ssm-agent[1963]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.913861 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 processing appconfig overrides
May 27 03:19:07.934633 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.934633 amazon-ssm-agent[1963]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:07.934633 amazon-ssm-agent[1963]: 2025/05/27 03:19:07 processing appconfig overrides
May 27 03:19:07.953392 update-ssh-keys[1991]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:19:07.948432 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 27 03:19:07.955024 systemd[1]: Finished sshkeys.service.
May 27 03:19:07.961670 containerd[1881]: time="2025-05-27T03:19:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:19:07.962679 polkitd[1961]: Started polkitd version 126
May 27 03:19:07.963235 locksmithd[1935]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 03:19:07.970277 containerd[1881]: time="2025-05-27T03:19:07.966253021Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:19:07.974571 polkitd[1961]: Loading rules from directory /etc/polkit-1/rules.d
May 27 03:19:07.975088 polkitd[1961]: Loading rules from directory /run/polkit-1/rules.d
May 27 03:19:07.975147 polkitd[1961]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 03:19:07.975597 polkitd[1961]: Loading rules from directory /usr/local/share/polkit-1/rules.d
May 27 03:19:07.975622 polkitd[1961]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 03:19:07.975669 polkitd[1961]: Loading rules from directory /usr/share/polkit-1/rules.d
May 27 03:19:07.978310 systemd-logind[1869]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 03:19:07.988798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:19:07.996551 polkitd[1961]: Finished loading, compiling and executing 2 rules
May 27 03:19:07.996900 systemd[1]: Started polkit.service - Authorization Manager.
May 27 03:19:08.000900 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 27 03:19:08.001637 amazon-ssm-agent[1963]: 2025-05-27 03:19:07.8997 INFO https_proxy:
May 27 03:19:08.002777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:19:08.003047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:19:08.005458 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:19:08.007540 polkitd[1961]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 27 03:19:08.009188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:19:08.013687 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:19:08.023563 systemd-logind[1869]: Watching system buttons on /dev/input/event3 (Sleep Button)
May 27 03:19:08.070130 containerd[1881]: time="2025-05-27T03:19:08.070078702Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.811µs"
May 27 03:19:08.070130 containerd[1881]: time="2025-05-27T03:19:08.070126037Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:19:08.070297 containerd[1881]: time="2025-05-27T03:19:08.070151408Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:19:08.070371 containerd[1881]: time="2025-05-27T03:19:08.070348972Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:19:08.070412 containerd[1881]: time="2025-05-27T03:19:08.070378277Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:19:08.070447 containerd[1881]: time="2025-05-27T03:19:08.070412972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:19:08.070507 containerd[1881]: time="2025-05-27T03:19:08.070487395Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:19:08.070549 containerd[1881]: time="2025-05-27T03:19:08.070508803Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:19:08.070838 containerd[1881]: time="2025-05-27T03:19:08.070810189Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:19:08.070888 containerd[1881]: time="2025-05-27T03:19:08.070838614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:19:08.070888 containerd[1881]: time="2025-05-27T03:19:08.070857560Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:19:08.070888 containerd[1881]: time="2025-05-27T03:19:08.070871108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:19:08.071046 containerd[1881]: time="2025-05-27T03:19:08.070978099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:19:08.081721 containerd[1881]: time="2025-05-27T03:19:08.081672316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:19:08.081822 containerd[1881]: time="2025-05-27T03:19:08.081751185Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:19:08.081822 containerd[1881]: time="2025-05-27T03:19:08.081769793Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:19:08.081896 containerd[1881]: time="2025-05-27T03:19:08.081829716Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:19:08.083937 containerd[1881]: time="2025-05-27T03:19:08.083834804Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:19:08.084028 containerd[1881]: time="2025-05-27T03:19:08.083965410Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:19:08.084523 systemd-hostnamed[1929]: Hostname set to (transient)
May 27 03:19:08.084917 systemd-resolved[1682]: System hostname changed to 'ip-172-31-29-86'.
May 27 03:19:08.088135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:19:08.095173 containerd[1881]: time="2025-05-27T03:19:08.095001647Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 03:19:08.095173 containerd[1881]: time="2025-05-27T03:19:08.095077432Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 03:19:08.095173 containerd[1881]: time="2025-05-27T03:19:08.095106397Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 03:19:08.095173 containerd[1881]: time="2025-05-27T03:19:08.095135619Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 03:19:08.095173 containerd[1881]: time="2025-05-27T03:19:08.095156008Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 03:19:08.095173 containerd[1881]: time="2025-05-27T03:19:08.095171712Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095187040Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095216219Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095232118Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095248272Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095261298Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095278252Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095417338Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095440898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095461731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095477450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095492962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:19:08.095507 containerd[1881]: time="2025-05-27T03:19:08.095509709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095527673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095542922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095561605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095576690Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095592179Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095678867Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095696861Z" level=info msg="Start snapshots syncer"
May 27 03:19:08.095875 containerd[1881]: time="2025-05-27T03:19:08.095722454Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:19:08.096142 containerd[1881]: time="2025-05-27T03:19:08.096027334Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:19:08.096142 containerd[1881]: time="2025-05-27T03:19:08.096087558Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096190577Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096351255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096383136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096398322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096412725Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096429330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096450081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096466817Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096498483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096512367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096528025Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096563712Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096584122Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:19:08.097175 containerd[1881]: time="2025-05-27T03:19:08.096596935Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096612237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096624210Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096639165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096654540Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096675412Z" level=info msg="runtime interface created"
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096682979Z" level=info msg="created NRI interface"
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096694953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096713067Z" level=info msg="Connect containerd service"
May 27 03:19:08.097744 containerd[1881]: time="2025-05-27T03:19:08.096754226Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 03:19:08.100274 containerd[1881]: time="2025-05-27T03:19:08.099871596Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 03:19:08.101139 amazon-ssm-agent[1963]: 2025-05-27 03:19:07.8997 INFO http_proxy:
May 27 03:19:08.213293 amazon-ssm-agent[1963]: 2025-05-27 03:19:07.8997 INFO no_proxy:
May 27 03:19:08.313633 amazon-ssm-agent[1963]: 2025-05-27 03:19:07.9059 INFO Checking if agent identity type OnPrem can be assumed
May 27 03:19:08.418536 sshd_keygen[1910]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 03:19:08.418746 amazon-ssm-agent[1963]: 2025-05-27 03:19:07.9061 INFO Checking if agent identity type EC2 can be assumed
May 27 03:19:08.498036 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 03:19:08.511646 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2599 INFO Agent will take identity from EC2
May 27 03:19:08.512809 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 03:19:08.588689 systemd[1]: issuegen.service: Deactivated successfully.
May 27 03:19:08.588996 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 03:19:08.594924 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 03:19:08.611651 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2618 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638625826Z" level=info msg="Start subscribing containerd event"
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638681600Z" level=info msg="Start recovering state"
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638820913Z" level=info msg="Start event monitor"
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638838194Z" level=info msg="Start cni network conf syncer for default"
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638848828Z" level=info msg="Start streaming server"
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638865049Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638874643Z" level=info msg="runtime interface starting up..."
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638883103Z" level=info msg="starting plugins..."
May 27 03:19:08.639002 containerd[1881]: time="2025-05-27T03:19:08.638899283Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 03:19:08.651152 containerd[1881]: time="2025-05-27T03:19:08.646361312Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 03:19:08.651152 containerd[1881]: time="2025-05-27T03:19:08.646563636Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 03:19:08.652624 systemd[1]: Started containerd.service - containerd container runtime.
May 27 03:19:08.653483 containerd[1881]: time="2025-05-27T03:19:08.653433834Z" level=info msg="containerd successfully booted in 0.699925s"
May 27 03:19:08.693811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 03:19:08.697483 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 03:19:08.705670 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 03:19:08.711283 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2622 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
May 27 03:19:08.712159 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 03:19:08.713113 systemd[1]: Reached target getty.target - Login Prompts.
May 27 03:19:08.718694 systemd[1]: Started sshd@0-172.31.29.86:22-139.178.68.195:52054.service - OpenSSH per-connection server daemon (139.178.68.195:52054).
May 27 03:19:08.811302 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2622 INFO [amazon-ssm-agent] Starting Core Agent
May 27 03:19:08.913099 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2622 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
May 27 03:19:09.008233 sshd[2142]: Accepted publickey for core from 139.178.68.195 port 52054 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:09.012116 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2622 INFO [Registrar] Starting registrar module
May 27 03:19:09.012859 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:09.026337 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 03:19:09.029538 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 03:19:09.074640 systemd-logind[1869]: New session 1 of user core.
May 27 03:19:09.087872 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 03:19:09.095144 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 03:19:09.112688 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2635 INFO [EC2Identity] Checking disk for registration info
May 27 03:19:09.117146 (systemd)[2146]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 03:19:09.126331 systemd-logind[1869]: New session c1 of user core.
May 27 03:19:09.214342 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2636 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
May 27 03:19:09.267350 tar[1874]: linux-amd64/LICENSE
May 27 03:19:09.267742 tar[1874]: linux-amd64/README.md
May 27 03:19:09.304917 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 03:19:09.317218 amazon-ssm-agent[1963]: 2025-05-27 03:19:08.2636 INFO [EC2Identity] Generating registration keypair
May 27 03:19:09.416343 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4145 INFO [EC2Identity] Checking write access before registering
May 27 03:19:09.421310 systemd[2146]: Queued start job for default target default.target.
May 27 03:19:09.425966 systemd[2146]: Created slice app.slice - User Application Slice.
May 27 03:19:09.426001 systemd[2146]: Reached target paths.target - Paths.
May 27 03:19:09.426047 systemd[2146]: Reached target timers.target - Timers.
May 27 03:19:09.429662 systemd[2146]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 03:19:09.456128 systemd[2146]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 03:19:09.457113 systemd[2146]: Reached target sockets.target - Sockets.
May 27 03:19:09.457683 systemd[2146]: Reached target basic.target - Basic System.
May 27 03:19:09.457773 systemd[2146]: Reached target default.target - Main User Target.
May 27 03:19:09.457813 systemd[2146]: Startup finished in 313ms.
May 27 03:19:09.457986 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 03:19:09.462318 amazon-ssm-agent[1963]: 2025/05/27 03:19:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:09.462444 amazon-ssm-agent[1963]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:19:09.462498 amazon-ssm-agent[1963]: 2025/05/27 03:19:09 processing appconfig overrides
May 27 03:19:09.463397 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 03:19:09.494440 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4150 INFO [EC2Identity] Registering EC2 instance with Systems Manager
May 27 03:19:09.494440 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4620 INFO [EC2Identity] EC2 registration was successful.
May 27 03:19:09.494440 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4621 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
May 27 03:19:09.494440 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4622 INFO [CredentialRefresher] credentialRefresher has started
May 27 03:19:09.494440 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4622 INFO [CredentialRefresher] Starting credentials refresher loop
May 27 03:19:09.494440 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4940 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 27 03:19:09.494440 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4942 INFO [CredentialRefresher] Credentials ready
May 27 03:19:09.516608 amazon-ssm-agent[1963]: 2025-05-27 03:19:09.4942 INFO [CredentialRefresher] Next credential rotation will be in 29.999995259866665 minutes
May 27 03:19:09.618417 systemd[1]: Started sshd@1-172.31.29.86:22-139.178.68.195:52070.service - OpenSSH per-connection server daemon (139.178.68.195:52070).
May 27 03:19:09.790125 sshd[2160]: Accepted publickey for core from 139.178.68.195 port 52070 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:09.791473 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:09.797371 systemd-logind[1869]: New session 2 of user core.
May 27 03:19:09.805382 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 03:19:09.922658 sshd[2162]: Connection closed by 139.178.68.195 port 52070
May 27 03:19:09.923382 sshd-session[2160]: pam_unix(sshd:session): session closed for user core
May 27 03:19:09.927004 systemd[1]: sshd@1-172.31.29.86:22-139.178.68.195:52070.service: Deactivated successfully.
May 27 03:19:09.930357 systemd[1]: session-2.scope: Deactivated successfully.
May 27 03:19:09.931413 systemd-logind[1869]: Session 2 logged out. Waiting for processes to exit.
May 27 03:19:09.932634 systemd-logind[1869]: Removed session 2.
May 27 03:19:09.953306 systemd[1]: Started sshd@2-172.31.29.86:22-139.178.68.195:52082.service - OpenSSH per-connection server daemon (139.178.68.195:52082).
May 27 03:19:10.075955 ntpd[1862]: Listen normally on 6 eth0 [fe80::4c8:a2ff:fe90:f48f%2]:123
May 27 03:19:10.076636 ntpd[1862]: 27 May 03:19:10 ntpd[1862]: Listen normally on 6 eth0 [fe80::4c8:a2ff:fe90:f48f%2]:123
May 27 03:19:10.107596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:10.109605 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 03:19:10.112556 systemd[1]: Startup finished in 2.752s (kernel) + 8.489s (initrd) + 7.004s (userspace) = 18.246s.
May 27 03:19:10.124010 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:19:10.131975 sshd[2168]: Accepted publickey for core from 139.178.68.195 port 52082 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:10.134271 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:10.148066 systemd-logind[1869]: New session 3 of user core.
May 27 03:19:10.151516 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 03:19:10.269696 sshd[2180]: Connection closed by 139.178.68.195 port 52082
May 27 03:19:10.271366 sshd-session[2168]: pam_unix(sshd:session): session closed for user core
May 27 03:19:10.275644 systemd[1]: sshd@2-172.31.29.86:22-139.178.68.195:52082.service: Deactivated successfully.
May 27 03:19:10.275759 systemd-logind[1869]: Session 3 logged out. Waiting for processes to exit.
May 27 03:19:10.278934 systemd[1]: session-3.scope: Deactivated successfully.
May 27 03:19:10.282256 systemd-logind[1869]: Removed session 3.
May 27 03:19:10.508997 amazon-ssm-agent[1963]: 2025-05-27 03:19:10.5086 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 27 03:19:10.610707 amazon-ssm-agent[1963]: 2025-05-27 03:19:10.5112 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2191) started
May 27 03:19:10.711469 amazon-ssm-agent[1963]: 2025-05-27 03:19:10.5112 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 27 03:19:10.937333 kubelet[2175]: E0527 03:19:10.937103 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:19:10.940043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:19:10.940250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:19:10.940841 systemd[1]: kubelet.service: Consumed 1.070s CPU time, 267M memory peak.
May 27 03:19:15.191944 systemd-resolved[1682]: Clock change detected. Flushing caches.
May 27 03:19:21.419944 systemd[1]: Started sshd@3-172.31.29.86:22-139.178.68.195:56278.service - OpenSSH per-connection server daemon (139.178.68.195:56278).
May 27 03:19:21.589490 sshd[2208]: Accepted publickey for core from 139.178.68.195 port 56278 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:21.590871 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:21.597141 systemd-logind[1869]: New session 4 of user core.
May 27 03:19:21.605305 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:19:21.724171 sshd[2210]: Connection closed by 139.178.68.195 port 56278
May 27 03:19:21.724938 sshd-session[2208]: pam_unix(sshd:session): session closed for user core
May 27 03:19:21.729173 systemd[1]: sshd@3-172.31.29.86:22-139.178.68.195:56278.service: Deactivated successfully.
May 27 03:19:21.729178 systemd-logind[1869]: Session 4 logged out. Waiting for processes to exit.
May 27 03:19:21.731332 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:19:21.732975 systemd-logind[1869]: Removed session 4.
May 27 03:19:21.757041 systemd[1]: Started sshd@4-172.31.29.86:22-139.178.68.195:56286.service - OpenSSH per-connection server daemon (139.178.68.195:56286).
May 27 03:19:21.923986 sshd[2216]: Accepted publickey for core from 139.178.68.195 port 56286 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:21.925294 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:21.930677 systemd-logind[1869]: New session 5 of user core.
May 27 03:19:21.933228 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:19:22.054320 sshd[2218]: Connection closed by 139.178.68.195 port 56286
May 27 03:19:22.054842 sshd-session[2216]: pam_unix(sshd:session): session closed for user core
May 27 03:19:22.058408 systemd[1]: sshd@4-172.31.29.86:22-139.178.68.195:56286.service: Deactivated successfully.
May 27 03:19:22.060033 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:19:22.061145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 03:19:22.061784 systemd-logind[1869]: Session 5 logged out. Waiting for processes to exit.
May 27 03:19:22.064235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:19:22.064985 systemd-logind[1869]: Removed session 5.
May 27 03:19:22.087393 systemd[1]: Started sshd@5-172.31.29.86:22-139.178.68.195:56296.service - OpenSSH per-connection server daemon (139.178.68.195:56296).
May 27 03:19:22.255095 sshd[2227]: Accepted publickey for core from 139.178.68.195 port 56296 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:22.255910 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:22.264154 systemd-logind[1869]: New session 6 of user core.
May 27 03:19:22.270353 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:19:22.325722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:22.341605 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:19:22.387019 kubelet[2235]: E0527 03:19:22.386845 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:19:22.390664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:19:22.390863 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:19:22.391572 systemd[1]: kubelet.service: Consumed 179ms CPU time, 109.6M memory peak.
May 27 03:19:22.392717 sshd[2229]: Connection closed by 139.178.68.195 port 56296
May 27 03:19:22.393307 sshd-session[2227]: pam_unix(sshd:session): session closed for user core
May 27 03:19:22.397121 systemd[1]: sshd@5-172.31.29.86:22-139.178.68.195:56296.service: Deactivated successfully.
May 27 03:19:22.399184 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:19:22.401596 systemd-logind[1869]: Session 6 logged out. Waiting for processes to exit.
May 27 03:19:22.402702 systemd-logind[1869]: Removed session 6.
May 27 03:19:22.424706 systemd[1]: Started sshd@6-172.31.29.86:22-139.178.68.195:56302.service - OpenSSH per-connection server daemon (139.178.68.195:56302).
May 27 03:19:22.598256 sshd[2247]: Accepted publickey for core from 139.178.68.195 port 56302 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:22.600100 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:22.606090 systemd-logind[1869]: New session 7 of user core.
May 27 03:19:22.612312 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:19:22.722286 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:19:22.722564 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:19:22.736803 sudo[2250]: pam_unix(sudo:session): session closed for user root
May 27 03:19:22.759195 sshd[2249]: Connection closed by 139.178.68.195 port 56302
May 27 03:19:22.760187 sshd-session[2247]: pam_unix(sshd:session): session closed for user core
May 27 03:19:22.764897 systemd[1]: sshd@6-172.31.29.86:22-139.178.68.195:56302.service: Deactivated successfully.
May 27 03:19:22.766760 systemd[1]: session-7.scope: Deactivated successfully.
May 27 03:19:22.767919 systemd-logind[1869]: Session 7 logged out. Waiting for processes to exit.
May 27 03:19:22.769537 systemd-logind[1869]: Removed session 7.
May 27 03:19:22.797122 systemd[1]: Started sshd@7-172.31.29.86:22-139.178.68.195:56316.service - OpenSSH per-connection server daemon (139.178.68.195:56316).
May 27 03:19:22.964346 sshd[2256]: Accepted publickey for core from 139.178.68.195 port 56316 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:22.965767 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:22.970463 systemd-logind[1869]: New session 8 of user core.
May 27 03:19:22.981373 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 03:19:23.077422 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:19:23.077784 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:19:23.082990 sudo[2260]: pam_unix(sudo:session): session closed for user root
May 27 03:19:23.088468 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:19:23.088826 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:19:23.099496 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:19:23.139221 augenrules[2282]: No rules
May 27 03:19:23.139925 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:19:23.140277 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:19:23.141481 sudo[2259]: pam_unix(sudo:session): session closed for user root
May 27 03:19:23.163794 sshd[2258]: Connection closed by 139.178.68.195 port 56316
May 27 03:19:23.164357 sshd-session[2256]: pam_unix(sshd:session): session closed for user core
May 27 03:19:23.167694 systemd[1]: sshd@7-172.31.29.86:22-139.178.68.195:56316.service: Deactivated successfully.
May 27 03:19:23.169273 systemd[1]: session-8.scope: Deactivated successfully.
May 27 03:19:23.170501 systemd-logind[1869]: Session 8 logged out. Waiting for processes to exit.
May 27 03:19:23.172656 systemd-logind[1869]: Removed session 8.
May 27 03:19:23.207909 systemd[1]: Started sshd@8-172.31.29.86:22-139.178.68.195:56318.service - OpenSSH per-connection server daemon (139.178.68.195:56318).
May 27 03:19:23.383624 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 56318 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:19:23.384833 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:19:23.390319 systemd-logind[1869]: New session 9 of user core.
May 27 03:19:23.395274 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 03:19:23.496094 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:19:23.496468 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:19:23.907915 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:19:23.918635 (dockerd)[2311]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:19:24.190442 dockerd[2311]: time="2025-05-27T03:19:24.190301944Z" level=info msg="Starting up"
May 27 03:19:24.191775 dockerd[2311]: time="2025-05-27T03:19:24.191337274Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:19:24.258090 dockerd[2311]: time="2025-05-27T03:19:24.258037129Z" level=info msg="Loading containers: start."
May 27 03:19:24.270107 kernel: Initializing XFRM netlink socket
May 27 03:19:24.486703 (udev-worker)[2333]: Network interface NamePolicy= disabled on kernel command line.
May 27 03:19:24.541182 systemd-networkd[1721]: docker0: Link UP
May 27 03:19:24.546225 dockerd[2311]: time="2025-05-27T03:19:24.546152834Z" level=info msg="Loading containers: done."
May 27 03:19:24.563952 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3092854756-merged.mount: Deactivated successfully.
May 27 03:19:24.569762 dockerd[2311]: time="2025-05-27T03:19:24.569714714Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:19:24.569992 dockerd[2311]: time="2025-05-27T03:19:24.569812380Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:19:24.569992 dockerd[2311]: time="2025-05-27T03:19:24.569951211Z" level=info msg="Initializing buildkit"
May 27 03:19:24.601891 dockerd[2311]: time="2025-05-27T03:19:24.601772690Z" level=info msg="Completed buildkit initialization"
May 27 03:19:24.613129 dockerd[2311]: time="2025-05-27T03:19:24.612112430Z" level=info msg="Daemon has completed initialization"
May 27 03:19:24.613129 dockerd[2311]: time="2025-05-27T03:19:24.612321448Z" level=info msg="API listen on /run/docker.sock"
May 27 03:19:24.612415 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:19:25.195979 systemd[1]: Started sshd@9-172.31.29.86:22-46.235.84.183:43966.service - OpenSSH per-connection server daemon (46.235.84.183:43966).
May 27 03:19:25.602800 containerd[1881]: time="2025-05-27T03:19:25.602717997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 27 03:19:26.161608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335283581.mount: Deactivated successfully.
May 27 03:19:28.245190 containerd[1881]: time="2025-05-27T03:19:28.245134627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:28.246287 containerd[1881]: time="2025-05-27T03:19:28.246051204Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845"
May 27 03:19:28.247230 containerd[1881]: time="2025-05-27T03:19:28.247200329Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:28.249749 containerd[1881]: time="2025-05-27T03:19:28.249714404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:28.250601 containerd[1881]: time="2025-05-27T03:19:28.250567517Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.647812431s"
May 27 03:19:28.250681 containerd[1881]: time="2025-05-27T03:19:28.250606358Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 27 03:19:28.251147 containerd[1881]: time="2025-05-27T03:19:28.251126244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 27 03:19:30.267656 containerd[1881]: time="2025-05-27T03:19:30.267603971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:30.269624 containerd[1881]: time="2025-05-27T03:19:30.269363083Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522"
May 27 03:19:30.270661 containerd[1881]: time="2025-05-27T03:19:30.270624598Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:30.274657 containerd[1881]: time="2025-05-27T03:19:30.274608254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:30.275768 containerd[1881]: time="2025-05-27T03:19:30.275738822Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 2.024582943s"
May 27 03:19:30.275877 containerd[1881]: time="2025-05-27T03:19:30.275863535Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 27 03:19:30.276894 containerd[1881]: time="2025-05-27T03:19:30.276868280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 27 03:19:31.850531 containerd[1881]: time="2025-05-27T03:19:31.850475988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:31.851690 containerd[1881]: time="2025-05-27T03:19:31.851535368Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311"
May 27 03:19:31.852601 containerd[1881]: time="2025-05-27T03:19:31.852568929Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:31.855477 containerd[1881]: time="2025-05-27T03:19:31.855439840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:31.856758 containerd[1881]: time="2025-05-27T03:19:31.856351765Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.579450026s"
May 27 03:19:31.856758 containerd[1881]: time="2025-05-27T03:19:31.856394312Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 27 03:19:31.856935 containerd[1881]: time="2025-05-27T03:19:31.856911145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 27 03:19:32.529620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 03:19:32.532778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:19:32.832245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:32.843696 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:19:32.915587 kubelet[2592]: E0527 03:19:32.915530 2592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:19:32.917863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:19:32.918052 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:19:32.918699 systemd[1]: kubelet.service: Consumed 221ms CPU time, 110.7M memory peak.
May 27 03:19:33.121707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198978682.mount: Deactivated successfully.
May 27 03:19:33.686966 containerd[1881]: time="2025-05-27T03:19:33.686908408Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623"
May 27 03:19:33.688115 containerd[1881]: time="2025-05-27T03:19:33.687122165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:33.688824 containerd[1881]: time="2025-05-27T03:19:33.688779101Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:33.691284 containerd[1881]: time="2025-05-27T03:19:33.691219885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:33.691799 containerd[1881]: time="2025-05-27T03:19:33.691768616Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.834823593s"
May 27 03:19:33.691799 containerd[1881]: time="2025-05-27T03:19:33.691799714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 27 03:19:33.692431 containerd[1881]: time="2025-05-27T03:19:33.692392187Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 03:19:34.191711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839605672.mount: Deactivated successfully.
May 27 03:19:35.141453 containerd[1881]: time="2025-05-27T03:19:35.141395959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:35.148529 containerd[1881]: time="2025-05-27T03:19:35.148482903Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 27 03:19:35.158720 containerd[1881]: time="2025-05-27T03:19:35.158648201Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:35.169606 containerd[1881]: time="2025-05-27T03:19:35.169501509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:35.171362 containerd[1881]: time="2025-05-27T03:19:35.171041409Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.47860108s"
May 27 03:19:35.171464 containerd[1881]: time="2025-05-27T03:19:35.171389532Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 03:19:35.171866 containerd[1881]: time="2025-05-27T03:19:35.171840899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 03:19:35.623163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564391887.mount: Deactivated successfully.
May 27 03:19:35.629341 containerd[1881]: time="2025-05-27T03:19:35.629285173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:19:35.630374 containerd[1881]: time="2025-05-27T03:19:35.630197429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 03:19:35.631505 containerd[1881]: time="2025-05-27T03:19:35.631470675Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:19:35.635810 containerd[1881]: time="2025-05-27T03:19:35.635038769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:19:35.635810 containerd[1881]: time="2025-05-27T03:19:35.635674660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.805113ms"
May 27 03:19:35.635810 containerd[1881]: time="2025-05-27T03:19:35.635708306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 03:19:35.636531 containerd[1881]: time="2025-05-27T03:19:35.636500886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 27 03:19:36.167923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233948038.mount: Deactivated successfully.
May 27 03:19:38.578054 containerd[1881]: time="2025-05-27T03:19:38.577984873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:38.579326 containerd[1881]: time="2025-05-27T03:19:38.579287703Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 27 03:19:38.580310 containerd[1881]: time="2025-05-27T03:19:38.580240920Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:38.583114 containerd[1881]: time="2025-05-27T03:19:38.582787297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:19:38.584318 containerd[1881]: time="2025-05-27T03:19:38.583981261Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.947436119s"
May 27 03:19:38.584318 containerd[1881]: time="2025-05-27T03:19:38.584020899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 27 03:19:39.210959 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 03:19:41.107935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:41.108224 systemd[1]: kubelet.service: Consumed 221ms CPU time, 110.7M memory peak.
May 27 03:19:41.111025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:19:41.150137 systemd[1]: Reload requested from client PID 2744 ('systemctl') (unit session-9.scope)...
May 27 03:19:41.150301 systemd[1]: Reloading...
May 27 03:19:41.269107 zram_generator::config[2787]: No configuration found.
May 27 03:19:41.397216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:19:41.542436 systemd[1]: Reloading finished in 391 ms.
May 27 03:19:41.584092 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 03:19:41.584189 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 03:19:41.584482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:41.587873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:19:42.089908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:42.101496 (kubelet)[2850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:19:42.157431 kubelet[2850]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:19:42.157431 kubelet[2850]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 27 03:19:42.157431 kubelet[2850]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:19:42.162868 kubelet[2850]: I0527 03:19:42.162459 2850 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:19:42.529179 kubelet[2850]: I0527 03:19:42.529049 2850 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 27 03:19:42.529179 kubelet[2850]: I0527 03:19:42.529108 2850 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:19:42.531092 kubelet[2850]: I0527 03:19:42.529485 2850 server.go:934] "Client rotation is on, will bootstrap in background"
May 27 03:19:42.588485 kubelet[2850]: I0527 03:19:42.588423 2850 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:19:42.591331 kubelet[2850]: E0527 03:19:42.591230 2850 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError"
May 27 03:19:42.614534 kubelet[2850]: I0527 03:19:42.614408 2850 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:19:42.625559 kubelet[2850]: I0527 03:19:42.625515 2850 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:19:42.628040 kubelet[2850]: I0527 03:19:42.627976 2850 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 27 03:19:42.628328 kubelet[2850]: I0527 03:19:42.628286 2850 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:19:42.628553 kubelet[2850]: I0527 03:19:42.628323 2850 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-86","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:19:42.628710 kubelet[2850]: I0527 03:19:42.628562 2850 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:19:42.628710 kubelet[2850]: I0527 03:19:42.628578 2850 container_manager_linux.go:300] "Creating device plugin manager"
May 27 03:19:42.629658 kubelet[2850]: I0527 03:19:42.629623 2850 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:19:42.635145 kubelet[2850]: I0527 03:19:42.634910 2850 kubelet.go:408] "Attempting to sync node with API server"
May 27 03:19:42.635145 kubelet[2850]: I0527 03:19:42.634963 2850 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:19:42.635145 kubelet[2850]: I0527 03:19:42.635012 2850 kubelet.go:314] "Adding apiserver pod source"
May 27 03:19:42.635145 kubelet[2850]: I0527 03:19:42.635038 2850 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:19:42.640027 kubelet[2850]: W0527 03:19:42.639100 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-86&limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused
May 27 03:19:42.640027 kubelet[2850]: E0527 03:19:42.639200 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-86&limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError"
May 27 03:19:42.640027 kubelet[2850]: W0527 03:19:42.639831 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused
May 27 03:19:42.640027 kubelet[2850]: E0527 03:19:42.639887 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError"
May 27 03:19:42.640814 kubelet[2850]: I0527 03:19:42.640792 2850 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:19:42.645418 kubelet[2850]: I0527 03:19:42.645301 2850 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 03:19:42.646308 kubelet[2850]: W0527 03:19:42.646273 2850 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 03:19:42.647094 kubelet[2850]: I0527 03:19:42.646949 2850 server.go:1274] "Started kubelet" May 27 03:19:42.647313 kubelet[2850]: I0527 03:19:42.647265 2850 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:19:42.650868 kubelet[2850]: I0527 03:19:42.650826 2850 server.go:449] "Adding debug handlers to kubelet server" May 27 03:19:42.656894 kubelet[2850]: I0527 03:19:42.656355 2850 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:19:42.656894 kubelet[2850]: I0527 03:19:42.656646 2850 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:19:42.658157 kubelet[2850]: I0527 03:19:42.658045 2850 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:19:42.661185 kubelet[2850]: I0527 03:19:42.661156 2850 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:19:42.664697 kubelet[2850]: I0527 03:19:42.664304 2850 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 03:19:42.664697 kubelet[2850]: E0527 03:19:42.664580 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:42.667658 kubelet[2850]: E0527 03:19:42.661932 2850 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.86:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.86:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-86.1843441f8ccedd0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-86,UID:ip-172-31-29-86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-86,},FirstTimestamp:2025-05-27 
03:19:42.646926607 +0000 UTC m=+0.540998897,LastTimestamp:2025-05-27 03:19:42.646926607 +0000 UTC m=+0.540998897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-86,}" May 27 03:19:42.668335 kubelet[2850]: I0527 03:19:42.668262 2850 factory.go:221] Registration of the systemd container factory successfully May 27 03:19:42.668553 kubelet[2850]: I0527 03:19:42.668534 2850 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:19:42.671787 kubelet[2850]: I0527 03:19:42.671527 2850 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 03:19:42.671787 kubelet[2850]: I0527 03:19:42.671588 2850 reconciler.go:26] "Reconciler: start to sync state" May 27 03:19:42.674415 kubelet[2850]: E0527 03:19:42.674360 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-86?timeout=10s\": dial tcp 172.31.29.86:6443: connect: connection refused" interval="200ms" May 27 03:19:42.677950 kubelet[2850]: W0527 03:19:42.676555 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:42.677950 kubelet[2850]: E0527 03:19:42.676605 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:42.677950 kubelet[2850]: I0527 
03:19:42.676800 2850 factory.go:221] Registration of the containerd container factory successfully May 27 03:19:42.694871 kubelet[2850]: I0527 03:19:42.694813 2850 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:19:42.698456 kubelet[2850]: I0527 03:19:42.698415 2850 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:19:42.698456 kubelet[2850]: I0527 03:19:42.698445 2850 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 03:19:42.698626 kubelet[2850]: I0527 03:19:42.698481 2850 kubelet.go:2321] "Starting kubelet main sync loop" May 27 03:19:42.698626 kubelet[2850]: E0527 03:19:42.698532 2850 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:19:42.706007 kubelet[2850]: W0527 03:19:42.705954 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:42.706242 kubelet[2850]: E0527 03:19:42.706014 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:42.717033 kubelet[2850]: I0527 03:19:42.717001 2850 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 03:19:42.717033 kubelet[2850]: I0527 03:19:42.717021 2850 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 03:19:42.717033 kubelet[2850]: I0527 03:19:42.717041 2850 state_mem.go:36] "Initialized new in-memory state store" May 27 03:19:42.719822 kubelet[2850]: I0527 03:19:42.719784 2850 
policy_none.go:49] "None policy: Start" May 27 03:19:42.720647 kubelet[2850]: I0527 03:19:42.720619 2850 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 03:19:42.720647 kubelet[2850]: I0527 03:19:42.720640 2850 state_mem.go:35] "Initializing new in-memory state store" May 27 03:19:42.736133 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:19:42.748403 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:19:42.752712 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:19:42.763440 kubelet[2850]: I0527 03:19:42.763358 2850 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:19:42.763613 kubelet[2850]: I0527 03:19:42.763595 2850 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:19:42.763676 kubelet[2850]: I0527 03:19:42.763616 2850 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:19:42.764087 kubelet[2850]: I0527 03:19:42.764053 2850 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:19:42.765587 kubelet[2850]: E0527 03:19:42.765562 2850 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-86\" not found" May 27 03:19:42.811807 systemd[1]: Created slice kubepods-burstable-pod0279014dfd4e43909e4f6475cec47c4e.slice - libcontainer container kubepods-burstable-pod0279014dfd4e43909e4f6475cec47c4e.slice. May 27 03:19:42.827437 systemd[1]: Created slice kubepods-burstable-pod5fd2d34b85449565c7c50bb4661f95e3.slice - libcontainer container kubepods-burstable-pod5fd2d34b85449565c7c50bb4661f95e3.slice. 
May 27 03:19:42.832823 systemd[1]: Created slice kubepods-burstable-pod08019f8c2eac197d4411f4d2d8a99c62.slice - libcontainer container kubepods-burstable-pod08019f8c2eac197d4411f4d2d8a99c62.slice. May 27 03:19:42.865739 kubelet[2850]: I0527 03:19:42.865708 2850 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-86" May 27 03:19:42.866139 kubelet[2850]: E0527 03:19:42.866106 2850 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.86:6443/api/v1/nodes\": dial tcp 172.31.29.86:6443: connect: connection refused" node="ip-172-31-29-86" May 27 03:19:42.875876 kubelet[2850]: E0527 03:19:42.875818 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-86?timeout=10s\": dial tcp 172.31.29.86:6443: connect: connection refused" interval="400ms" May 27 03:19:42.891491 kubelet[2850]: E0527 03:19:42.891394 2850 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.86:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.86:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-86.1843441f8ccedd0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-86,UID:ip-172-31-29-86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-86,},FirstTimestamp:2025-05-27 03:19:42.646926607 +0000 UTC m=+0.540998897,LastTimestamp:2025-05-27 03:19:42.646926607 +0000 UTC m=+0.540998897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-86,}" May 27 03:19:42.972950 kubelet[2850]: I0527 03:19:42.972897 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0279014dfd4e43909e4f6475cec47c4e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-86\" (UID: \"0279014dfd4e43909e4f6475cec47c4e\") " pod="kube-system/kube-apiserver-ip-172-31-29-86" May 27 03:19:42.973129 kubelet[2850]: I0527 03:19:42.972960 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86" May 27 03:19:42.973129 kubelet[2850]: I0527 03:19:42.972988 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86" May 27 03:19:42.973129 kubelet[2850]: I0527 03:19:42.973052 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0279014dfd4e43909e4f6475cec47c4e-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-86\" (UID: \"0279014dfd4e43909e4f6475cec47c4e\") " pod="kube-system/kube-apiserver-ip-172-31-29-86" May 27 03:19:42.973129 kubelet[2850]: I0527 03:19:42.973091 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86" May 27 03:19:42.973129 kubelet[2850]: I0527 03:19:42.973116 2850 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86" May 27 03:19:42.973338 kubelet[2850]: I0527 03:19:42.973139 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86" May 27 03:19:42.973338 kubelet[2850]: I0527 03:19:42.973163 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08019f8c2eac197d4411f4d2d8a99c62-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-86\" (UID: \"08019f8c2eac197d4411f4d2d8a99c62\") " pod="kube-system/kube-scheduler-ip-172-31-29-86" May 27 03:19:42.973338 kubelet[2850]: I0527 03:19:42.973185 2850 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0279014dfd4e43909e4f6475cec47c4e-ca-certs\") pod \"kube-apiserver-ip-172-31-29-86\" (UID: \"0279014dfd4e43909e4f6475cec47c4e\") " pod="kube-system/kube-apiserver-ip-172-31-29-86" May 27 03:19:43.068939 kubelet[2850]: I0527 03:19:43.068732 2850 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-86" May 27 03:19:43.069355 kubelet[2850]: E0527 03:19:43.069320 2850 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.86:6443/api/v1/nodes\": dial tcp 172.31.29.86:6443: connect: connection refused" node="ip-172-31-29-86" May 27 
03:19:43.124100 containerd[1881]: time="2025-05-27T03:19:43.124034789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-86,Uid:0279014dfd4e43909e4f6475cec47c4e,Namespace:kube-system,Attempt:0,}" May 27 03:19:43.131889 containerd[1881]: time="2025-05-27T03:19:43.131714676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-86,Uid:5fd2d34b85449565c7c50bb4661f95e3,Namespace:kube-system,Attempt:0,}" May 27 03:19:43.135875 containerd[1881]: time="2025-05-27T03:19:43.135837104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-86,Uid:08019f8c2eac197d4411f4d2d8a99c62,Namespace:kube-system,Attempt:0,}" May 27 03:19:43.273326 containerd[1881]: time="2025-05-27T03:19:43.273250126Z" level=info msg="connecting to shim 6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360" address="unix:///run/containerd/s/2b21ccd8259ff4f04e018029106945ce8830922396f44bc2d3df48c2be7542fd" namespace=k8s.io protocol=ttrpc version=3 May 27 03:19:43.275752 containerd[1881]: time="2025-05-27T03:19:43.275713217Z" level=info msg="connecting to shim 70e0fb72b7971c246c983d04f7c541aa4d16a0daee84439db83cd7d06e0b12c1" address="unix:///run/containerd/s/b042a7d32cb0900a8062746cd255ebc4a817ca42f911050b2b6e96161407e2b9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:19:43.277118 kubelet[2850]: E0527 03:19:43.276846 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-86?timeout=10s\": dial tcp 172.31.29.86:6443: connect: connection refused" interval="800ms" May 27 03:19:43.280618 containerd[1881]: time="2025-05-27T03:19:43.280219895Z" level=info msg="connecting to shim 87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae" address="unix:///run/containerd/s/c48784f9d19d583ad7d20e9314650ba4633ac1e2eddc472a517428c37b65dcfd" namespace=k8s.io 
protocol=ttrpc version=3 May 27 03:19:43.390423 systemd[1]: Started cri-containerd-70e0fb72b7971c246c983d04f7c541aa4d16a0daee84439db83cd7d06e0b12c1.scope - libcontainer container 70e0fb72b7971c246c983d04f7c541aa4d16a0daee84439db83cd7d06e0b12c1. May 27 03:19:43.405336 systemd[1]: Started cri-containerd-6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360.scope - libcontainer container 6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360. May 27 03:19:43.412175 systemd[1]: Started cri-containerd-87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae.scope - libcontainer container 87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae. May 27 03:19:43.472906 kubelet[2850]: I0527 03:19:43.472864 2850 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-86" May 27 03:19:43.474277 kubelet[2850]: E0527 03:19:43.474228 2850 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.86:6443/api/v1/nodes\": dial tcp 172.31.29.86:6443: connect: connection refused" node="ip-172-31-29-86" May 27 03:19:43.550232 containerd[1881]: time="2025-05-27T03:19:43.550185182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-86,Uid:5fd2d34b85449565c7c50bb4661f95e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360\"" May 27 03:19:43.552521 containerd[1881]: time="2025-05-27T03:19:43.552456784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-86,Uid:0279014dfd4e43909e4f6475cec47c4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"70e0fb72b7971c246c983d04f7c541aa4d16a0daee84439db83cd7d06e0b12c1\"" May 27 03:19:43.562276 containerd[1881]: time="2025-05-27T03:19:43.562134578Z" level=info msg="CreateContainer within sandbox \"70e0fb72b7971c246c983d04f7c541aa4d16a0daee84439db83cd7d06e0b12c1\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:19:43.563183 containerd[1881]: time="2025-05-27T03:19:43.562233376Z" level=info msg="CreateContainer within sandbox \"6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:19:43.576168 containerd[1881]: time="2025-05-27T03:19:43.576132854Z" level=info msg="Container 8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe: CDI devices from CRI Config.CDIDevices: []" May 27 03:19:43.584615 kubelet[2850]: W0527 03:19:43.584544 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:43.584802 kubelet[2850]: E0527 03:19:43.584785 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:43.589845 containerd[1881]: time="2025-05-27T03:19:43.589783759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-86,Uid:08019f8c2eac197d4411f4d2d8a99c62,Namespace:kube-system,Attempt:0,} returns sandbox id \"87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae\"" May 27 03:19:43.593804 containerd[1881]: time="2025-05-27T03:19:43.593338635Z" level=info msg="CreateContainer within sandbox \"87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:19:43.595861 containerd[1881]: time="2025-05-27T03:19:43.595820013Z" level=info msg="CreateContainer within sandbox 
\"6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe\"" May 27 03:19:43.597762 containerd[1881]: time="2025-05-27T03:19:43.597107117Z" level=info msg="StartContainer for \"8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe\"" May 27 03:19:43.599370 containerd[1881]: time="2025-05-27T03:19:43.599321755Z" level=info msg="connecting to shim 8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe" address="unix:///run/containerd/s/2b21ccd8259ff4f04e018029106945ce8830922396f44bc2d3df48c2be7542fd" protocol=ttrpc version=3 May 27 03:19:43.601859 containerd[1881]: time="2025-05-27T03:19:43.601827702Z" level=info msg="Container ab85a3060a62a5228ec3535f81e66a0a5924284ec0a855e4732f26ea460d539b: CDI devices from CRI Config.CDIDevices: []" May 27 03:19:43.604067 containerd[1881]: time="2025-05-27T03:19:43.604036773Z" level=info msg="Container 10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1: CDI devices from CRI Config.CDIDevices: []" May 27 03:19:43.619801 containerd[1881]: time="2025-05-27T03:19:43.619758975Z" level=info msg="CreateContainer within sandbox \"87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1\"" May 27 03:19:43.620018 containerd[1881]: time="2025-05-27T03:19:43.619992573Z" level=info msg="CreateContainer within sandbox \"70e0fb72b7971c246c983d04f7c541aa4d16a0daee84439db83cd7d06e0b12c1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ab85a3060a62a5228ec3535f81e66a0a5924284ec0a855e4732f26ea460d539b\"" May 27 03:19:43.620961 containerd[1881]: time="2025-05-27T03:19:43.620859758Z" level=info msg="StartContainer for 
\"ab85a3060a62a5228ec3535f81e66a0a5924284ec0a855e4732f26ea460d539b\"" May 27 03:19:43.621844 containerd[1881]: time="2025-05-27T03:19:43.621812689Z" level=info msg="StartContainer for \"10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1\"" May 27 03:19:43.623951 containerd[1881]: time="2025-05-27T03:19:43.623907088Z" level=info msg="connecting to shim ab85a3060a62a5228ec3535f81e66a0a5924284ec0a855e4732f26ea460d539b" address="unix:///run/containerd/s/b042a7d32cb0900a8062746cd255ebc4a817ca42f911050b2b6e96161407e2b9" protocol=ttrpc version=3 May 27 03:19:43.624960 containerd[1881]: time="2025-05-27T03:19:43.624901354Z" level=info msg="connecting to shim 10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1" address="unix:///run/containerd/s/c48784f9d19d583ad7d20e9314650ba4633ac1e2eddc472a517428c37b65dcfd" protocol=ttrpc version=3 May 27 03:19:43.628430 systemd[1]: Started cri-containerd-8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe.scope - libcontainer container 8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe. May 27 03:19:43.658285 systemd[1]: Started cri-containerd-10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1.scope - libcontainer container 10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1. May 27 03:19:43.674318 systemd[1]: Started cri-containerd-ab85a3060a62a5228ec3535f81e66a0a5924284ec0a855e4732f26ea460d539b.scope - libcontainer container ab85a3060a62a5228ec3535f81e66a0a5924284ec0a855e4732f26ea460d539b. 
May 27 03:19:43.756721 containerd[1881]: time="2025-05-27T03:19:43.756671994Z" level=info msg="StartContainer for \"8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe\" returns successfully" May 27 03:19:43.796332 containerd[1881]: time="2025-05-27T03:19:43.796286839Z" level=info msg="StartContainer for \"ab85a3060a62a5228ec3535f81e66a0a5924284ec0a855e4732f26ea460d539b\" returns successfully" May 27 03:19:43.810545 containerd[1881]: time="2025-05-27T03:19:43.810499413Z" level=info msg="StartContainer for \"10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1\" returns successfully" May 27 03:19:43.876733 kubelet[2850]: W0527 03:19:43.876649 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:43.876942 kubelet[2850]: E0527 03:19:43.876917 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:43.949852 kubelet[2850]: W0527 03:19:43.948495 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-86&limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:43.949852 kubelet[2850]: E0527 03:19:43.948771 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-86&limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: 
connect: connection refused" logger="UnhandledError" May 27 03:19:44.078225 kubelet[2850]: E0527 03:19:44.078169 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-86?timeout=10s\": dial tcp 172.31.29.86:6443: connect: connection refused" interval="1.6s" May 27 03:19:44.244916 kubelet[2850]: W0527 03:19:44.244724 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:44.244916 kubelet[2850]: E0527 03:19:44.244810 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:44.277842 kubelet[2850]: I0527 03:19:44.277306 2850 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-86" May 27 03:19:44.277842 kubelet[2850]: E0527 03:19:44.277652 2850 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.86:6443/api/v1/nodes\": dial tcp 172.31.29.86:6443: connect: connection refused" node="ip-172-31-29-86" May 27 03:19:44.604886 kubelet[2850]: E0527 03:19:44.604433 2850 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:45.466003 kubelet[2850]: W0527 03:19:45.465954 
2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:45.466003 kubelet[2850]: E0527 03:19:45.466002 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:45.679452 kubelet[2850]: E0527 03:19:45.679406 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-86?timeout=10s\": dial tcp 172.31.29.86:6443: connect: connection refused" interval="3.2s" May 27 03:19:45.836702 kubelet[2850]: W0527 03:19:45.836660 2850 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-86&limit=500&resourceVersion=0": dial tcp 172.31.29.86:6443: connect: connection refused May 27 03:19:45.836702 kubelet[2850]: E0527 03:19:45.836706 2850 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-86&limit=500&resourceVersion=0\": dial tcp 172.31.29.86:6443: connect: connection refused" logger="UnhandledError" May 27 03:19:45.880095 kubelet[2850]: I0527 03:19:45.880038 2850 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-86" May 27 03:19:45.880440 kubelet[2850]: E0527 03:19:45.880409 2850 kubelet_node_status.go:95] "Unable to register node with 
API server" err="Post \"https://172.31.29.86:6443/api/v1/nodes\": dial tcp 172.31.29.86:6443: connect: connection refused" node="ip-172-31-29-86" May 27 03:19:48.343064 kubelet[2850]: E0527 03:19:48.343021 2850 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-29-86" not found May 27 03:19:48.696810 kubelet[2850]: E0527 03:19:48.696711 2850 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-29-86" not found May 27 03:19:48.885742 kubelet[2850]: E0527 03:19:48.885694 2850 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-86\" not found" node="ip-172-31-29-86" May 27 03:19:49.083686 kubelet[2850]: I0527 03:19:49.083160 2850 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-86" May 27 03:19:49.097087 kubelet[2850]: I0527 03:19:49.097026 2850 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-86" May 27 03:19:49.097378 kubelet[2850]: E0527 03:19:49.097273 2850 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-29-86\": node \"ip-172-31-29-86\" not found" May 27 03:19:49.110246 kubelet[2850]: E0527 03:19:49.110212 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.210335 kubelet[2850]: E0527 03:19:49.210293 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.310821 kubelet[2850]: E0527 03:19:49.310780 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.411540 kubelet[2850]: E0527 03:19:49.411341 2850 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.512033 kubelet[2850]: E0527 03:19:49.511975 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.612743 kubelet[2850]: E0527 03:19:49.612694 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.713294 kubelet[2850]: E0527 03:19:49.713003 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.813821 kubelet[2850]: E0527 03:19:49.813767 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:49.914202 kubelet[2850]: E0527 03:19:49.914142 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:50.014906 kubelet[2850]: E0527 03:19:50.014788 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:50.061239 systemd[1]: Reload requested from client PID 3117 ('systemctl') (unit session-9.scope)... May 27 03:19:50.061260 systemd[1]: Reloading... May 27 03:19:50.115567 kubelet[2850]: E0527 03:19:50.115532 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:50.180110 zram_generator::config[3162]: No configuration found. May 27 03:19:50.216469 kubelet[2850]: E0527 03:19:50.216427 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found" May 27 03:19:50.298279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 27 03:19:50.316923 kubelet[2850]: E0527 03:19:50.316870 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found"
May 27 03:19:50.417446 kubelet[2850]: E0527 03:19:50.417398 2850 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found"
May 27 03:19:50.454856 systemd[1]: Reloading finished in 393 ms.
May 27 03:19:50.496355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:19:50.496753 kubelet[2850]: I0527 03:19:50.496634 2850 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:19:50.520476 systemd[1]: kubelet.service: Deactivated successfully.
May 27 03:19:50.520689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:50.520743 systemd[1]: kubelet.service: Consumed 893ms CPU time, 127.2M memory peak.
May 27 03:19:50.524782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:19:50.759479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:19:50.772002 (kubelet)[3223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:19:50.849629 kubelet[3223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:19:50.849629 kubelet[3223]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 27 03:19:50.849629 kubelet[3223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:19:50.849629 kubelet[3223]: I0527 03:19:50.849384 3223 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:19:50.859101 kubelet[3223]: I0527 03:19:50.857198 3223 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 27 03:19:50.859101 kubelet[3223]: I0527 03:19:50.857221 3223 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:19:50.859101 kubelet[3223]: I0527 03:19:50.857506 3223 server.go:934] "Client rotation is on, will bootstrap in background"
May 27 03:19:50.859101 kubelet[3223]: I0527 03:19:50.858973 3223 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 27 03:19:50.868751 kubelet[3223]: I0527 03:19:50.868712 3223 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:19:50.873620 kubelet[3223]: I0527 03:19:50.873594 3223 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:19:50.878704 kubelet[3223]: I0527 03:19:50.878673 3223 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:19:50.878994 kubelet[3223]: I0527 03:19:50.878981 3223 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 27 03:19:50.879276 kubelet[3223]: I0527 03:19:50.879242 3223 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:19:50.879550 kubelet[3223]: I0527 03:19:50.879372 3223 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-86","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:19:50.879678 kubelet[3223]: I0527 03:19:50.879669 3223 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:19:50.879735 kubelet[3223]: I0527 03:19:50.879728 3223 container_manager_linux.go:300] "Creating device plugin manager"
May 27 03:19:50.879797 kubelet[3223]: I0527 03:19:50.879791 3223 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:19:50.879946 kubelet[3223]: I0527 03:19:50.879935 3223 kubelet.go:408] "Attempting to sync node with API server"
May 27 03:19:50.880029 kubelet[3223]: I0527 03:19:50.880019 3223 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:19:50.880141 kubelet[3223]: I0527 03:19:50.880131 3223 kubelet.go:314] "Adding apiserver pod source"
May 27 03:19:50.880213 kubelet[3223]: I0527 03:19:50.880205 3223 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:19:50.884523 kubelet[3223]: I0527 03:19:50.884498 3223 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:19:50.885354 kubelet[3223]: I0527 03:19:50.885330 3223 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 03:19:50.885993 kubelet[3223]: I0527 03:19:50.885977 3223 server.go:1274] "Started kubelet"
May 27 03:19:50.893626 kubelet[3223]: I0527 03:19:50.893598 3223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:19:50.909284 kubelet[3223]: I0527 03:19:50.909222 3223 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:19:50.911297 kubelet[3223]: I0527 03:19:50.911272 3223 server.go:449] "Adding debug handlers to kubelet server"
May 27 03:19:50.917588 kubelet[3223]: I0527 03:19:50.917511 3223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:19:50.920140 kubelet[3223]: I0527 03:19:50.920105 3223 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:19:50.923259 kubelet[3223]: I0527 03:19:50.923223 3223 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 27 03:19:50.923553 kubelet[3223]: E0527 03:19:50.923528 3223 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-86\" not found"
May 27 03:19:50.925820 kubelet[3223]: I0527 03:19:50.925248 3223 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 27 03:19:50.925820 kubelet[3223]: I0527 03:19:50.925416 3223 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:19:50.933236 kubelet[3223]: I0527 03:19:50.933203 3223 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:19:50.934101 kubelet[3223]: I0527 03:19:50.933611 3223 factory.go:221] Registration of the systemd container factory successfully
May 27 03:19:50.936813 kubelet[3223]: I0527 03:19:50.936244 3223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:19:50.942236 kubelet[3223]: I0527 03:19:50.942204 3223 factory.go:221] Registration of the containerd container factory successfully
May 27 03:19:50.945183 kubelet[3223]: I0527 03:19:50.945118 3223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 03:19:50.947813 kubelet[3223]: E0527 03:19:50.947410 3223 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:19:50.949492 kubelet[3223]: I0527 03:19:50.949466 3223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 03:19:50.949492 kubelet[3223]: I0527 03:19:50.949496 3223 status_manager.go:217] "Starting to sync pod status with apiserver"
May 27 03:19:50.949658 kubelet[3223]: I0527 03:19:50.949516 3223 kubelet.go:2321] "Starting kubelet main sync loop"
May 27 03:19:50.949658 kubelet[3223]: E0527 03:19:50.949565 3223 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:19:50.996618 kubelet[3223]: I0527 03:19:50.996587 3223 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 27 03:19:50.996618 kubelet[3223]: I0527 03:19:50.996613 3223 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 27 03:19:50.996814 kubelet[3223]: I0527 03:19:50.996656 3223 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:19:50.996994 kubelet[3223]: I0527 03:19:50.996973 3223 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 03:19:50.997059 kubelet[3223]: I0527 03:19:50.996994 3223 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 03:19:50.997059 kubelet[3223]: I0527 03:19:50.997024 3223 policy_none.go:49] "None policy: Start"
May 27 03:19:50.999107 kubelet[3223]: I0527 03:19:50.997891 3223 memory_manager.go:170] "Starting memorymanager" policy="None"
May 27 03:19:50.999107 kubelet[3223]: I0527 03:19:50.997921 3223 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:19:50.999107 kubelet[3223]: I0527 03:19:50.998195 3223 state_mem.go:75] "Updated machine memory state"
May 27 03:19:51.004856 kubelet[3223]: I0527 03:19:51.004825 3223 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 03:19:51.005056 kubelet[3223]: I0527 03:19:51.005038 3223 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:19:51.005138 kubelet[3223]: I0527 03:19:51.005060 3223 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:19:51.005406 kubelet[3223]: I0527 03:19:51.005364 3223 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:19:51.108824 kubelet[3223]: I0527 03:19:51.108790 3223 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-86"
May 27 03:19:51.118281 kubelet[3223]: I0527 03:19:51.118234 3223 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-29-86"
May 27 03:19:51.118782 kubelet[3223]: I0527 03:19:51.118539 3223 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-86"
May 27 03:19:51.126461 kubelet[3223]: I0527 03:19:51.126424 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86"
May 27 03:19:51.126733 kubelet[3223]: I0527 03:19:51.126487 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86"
May 27 03:19:51.126733 kubelet[3223]: I0527 03:19:51.126512 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86"
May 27 03:19:51.126733 kubelet[3223]: I0527 03:19:51.126532 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86"
May 27 03:19:51.126733 kubelet[3223]: I0527 03:19:51.126660 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fd2d34b85449565c7c50bb4661f95e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-86\" (UID: \"5fd2d34b85449565c7c50bb4661f95e3\") " pod="kube-system/kube-controller-manager-ip-172-31-29-86"
May 27 03:19:51.126733 kubelet[3223]: I0527 03:19:51.126688 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0279014dfd4e43909e4f6475cec47c4e-ca-certs\") pod \"kube-apiserver-ip-172-31-29-86\" (UID: \"0279014dfd4e43909e4f6475cec47c4e\") " pod="kube-system/kube-apiserver-ip-172-31-29-86"
May 27 03:19:51.126973 kubelet[3223]: I0527 03:19:51.126711 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0279014dfd4e43909e4f6475cec47c4e-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-86\" (UID: \"0279014dfd4e43909e4f6475cec47c4e\") " pod="kube-system/kube-apiserver-ip-172-31-29-86"
May 27 03:19:51.126973 kubelet[3223]: I0527 03:19:51.126789 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0279014dfd4e43909e4f6475cec47c4e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-86\" (UID: \"0279014dfd4e43909e4f6475cec47c4e\") " pod="kube-system/kube-apiserver-ip-172-31-29-86"
May 27 03:19:51.227795 kubelet[3223]: I0527 03:19:51.227647 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08019f8c2eac197d4411f4d2d8a99c62-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-86\" (UID: \"08019f8c2eac197d4411f4d2d8a99c62\") " pod="kube-system/kube-scheduler-ip-172-31-29-86"
May 27 03:19:51.881353 kubelet[3223]: I0527 03:19:51.881300 3223 apiserver.go:52] "Watching apiserver"
May 27 03:19:51.926430 kubelet[3223]: I0527 03:19:51.926354 3223 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 27 03:19:51.967606 kubelet[3223]: I0527 03:19:51.967529 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-86" podStartSLOduration=0.967504414 podStartE2EDuration="967.504414ms" podCreationTimestamp="2025-05-27 03:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:19:51.954310199 +0000 UTC m=+1.174558942" watchObservedRunningTime="2025-05-27 03:19:51.967504414 +0000 UTC m=+1.187753130"
May 27 03:19:51.985007 kubelet[3223]: I0527 03:19:51.984946 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-86" podStartSLOduration=0.984901068 podStartE2EDuration="984.901068ms" podCreationTimestamp="2025-05-27 03:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:19:51.967856975 +0000 UTC m=+1.188105678" watchObservedRunningTime="2025-05-27 03:19:51.984901068 +0000 UTC m=+1.205149759"
May 27 03:19:51.985679 kubelet[3223]: I0527 03:19:51.985633 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-86" podStartSLOduration=0.985619551 podStartE2EDuration="985.619551ms" podCreationTimestamp="2025-05-27 03:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:19:51.981217549 +0000 UTC m=+1.201466249" watchObservedRunningTime="2025-05-27 03:19:51.985619551 +0000 UTC m=+1.205868250"
May 27 03:19:51.991625 kubelet[3223]: E0527 03:19:51.991587 3223 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-86\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-86"
May 27 03:19:53.985184 update_engine[1871]: I20250527 03:19:53.985103 1871 update_attempter.cc:509] Updating boot flags...
May 27 03:19:56.937196 kubelet[3223]: I0527 03:19:56.937152 3223 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 27 03:19:56.937871 kubelet[3223]: I0527 03:19:56.937820 3223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 27 03:19:56.937921 containerd[1881]: time="2025-05-27T03:19:56.937583896Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 27 03:19:57.347910 systemd[1]: Created slice kubepods-besteffort-pod7a740b29_3dcc_4ea2_88a5_f57616bc6914.slice - libcontainer container kubepods-besteffort-pod7a740b29_3dcc_4ea2_88a5_f57616bc6914.slice.
May 27 03:19:57.370158 kubelet[3223]: I0527 03:19:57.370107 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a740b29-3dcc-4ea2-88a5-f57616bc6914-kube-proxy\") pod \"kube-proxy-4ftcw\" (UID: \"7a740b29-3dcc-4ea2-88a5-f57616bc6914\") " pod="kube-system/kube-proxy-4ftcw"
May 27 03:19:57.370158 kubelet[3223]: I0527 03:19:57.370152 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a740b29-3dcc-4ea2-88a5-f57616bc6914-xtables-lock\") pod \"kube-proxy-4ftcw\" (UID: \"7a740b29-3dcc-4ea2-88a5-f57616bc6914\") " pod="kube-system/kube-proxy-4ftcw"
May 27 03:19:57.370351 kubelet[3223]: I0527 03:19:57.370180 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56hs\" (UniqueName: \"kubernetes.io/projected/7a740b29-3dcc-4ea2-88a5-f57616bc6914-kube-api-access-s56hs\") pod \"kube-proxy-4ftcw\" (UID: \"7a740b29-3dcc-4ea2-88a5-f57616bc6914\") " pod="kube-system/kube-proxy-4ftcw"
May 27 03:19:57.370351 kubelet[3223]: I0527 03:19:57.370203 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a740b29-3dcc-4ea2-88a5-f57616bc6914-lib-modules\") pod \"kube-proxy-4ftcw\" (UID: \"7a740b29-3dcc-4ea2-88a5-f57616bc6914\") " pod="kube-system/kube-proxy-4ftcw"
May 27 03:19:57.488089 kubelet[3223]: E0527 03:19:57.487998 3223 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 27 03:19:57.488089 kubelet[3223]: E0527 03:19:57.488049 3223 projected.go:194] Error preparing data for projected volume kube-api-access-s56hs for pod kube-system/kube-proxy-4ftcw: configmap "kube-root-ca.crt" not found
May 27 03:19:57.488434 kubelet[3223]: E0527 03:19:57.488388 3223 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a740b29-3dcc-4ea2-88a5-f57616bc6914-kube-api-access-s56hs podName:7a740b29-3dcc-4ea2-88a5-f57616bc6914 nodeName:}" failed. No retries permitted until 2025-05-27 03:19:57.988333687 +0000 UTC m=+7.208582386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s56hs" (UniqueName: "kubernetes.io/projected/7a740b29-3dcc-4ea2-88a5-f57616bc6914-kube-api-access-s56hs") pod "kube-proxy-4ftcw" (UID: "7a740b29-3dcc-4ea2-88a5-f57616bc6914") : configmap "kube-root-ca.crt" not found
May 27 03:19:58.042747 systemd[1]: Created slice kubepods-besteffort-pod03a46d5a_12bb_4765_908d_1477aa0a0fb3.slice - libcontainer container kubepods-besteffort-pod03a46d5a_12bb_4765_908d_1477aa0a0fb3.slice.
May 27 03:19:58.075064 kubelet[3223]: I0527 03:19:58.074991 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03a46d5a-12bb-4765-908d-1477aa0a0fb3-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-8g6mf\" (UID: \"03a46d5a-12bb-4765-908d-1477aa0a0fb3\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8g6mf"
May 27 03:19:58.076026 kubelet[3223]: I0527 03:19:58.075047 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4np4\" (UniqueName: \"kubernetes.io/projected/03a46d5a-12bb-4765-908d-1477aa0a0fb3-kube-api-access-p4np4\") pod \"tigera-operator-7c5755cdcb-8g6mf\" (UID: \"03a46d5a-12bb-4765-908d-1477aa0a0fb3\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8g6mf"
May 27 03:19:58.257677 containerd[1881]: time="2025-05-27T03:19:58.257617339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ftcw,Uid:7a740b29-3dcc-4ea2-88a5-f57616bc6914,Namespace:kube-system,Attempt:0,}"
May 27 03:19:58.287341 containerd[1881]: time="2025-05-27T03:19:58.287153467Z" level=info msg="connecting to shim 5d2f67d7191f8e5cc0cfbe28764b078a8f201529b1783b10f21ebc8580d38f39" address="unix:///run/containerd/s/9f8fdddfe01941881b299549859e04589ccecad224a2c055e1f7c49be02bf624" namespace=k8s.io protocol=ttrpc version=3
May 27 03:19:58.320310 systemd[1]: Started cri-containerd-5d2f67d7191f8e5cc0cfbe28764b078a8f201529b1783b10f21ebc8580d38f39.scope - libcontainer container 5d2f67d7191f8e5cc0cfbe28764b078a8f201529b1783b10f21ebc8580d38f39.
May 27 03:19:58.347396 containerd[1881]: time="2025-05-27T03:19:58.347354882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8g6mf,Uid:03a46d5a-12bb-4765-908d-1477aa0a0fb3,Namespace:tigera-operator,Attempt:0,}"
May 27 03:19:58.350387 containerd[1881]: time="2025-05-27T03:19:58.350344543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ftcw,Uid:7a740b29-3dcc-4ea2-88a5-f57616bc6914,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d2f67d7191f8e5cc0cfbe28764b078a8f201529b1783b10f21ebc8580d38f39\""
May 27 03:19:58.355944 containerd[1881]: time="2025-05-27T03:19:58.355903294Z" level=info msg="CreateContainer within sandbox \"5d2f67d7191f8e5cc0cfbe28764b078a8f201529b1783b10f21ebc8580d38f39\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 27 03:19:58.375156 containerd[1881]: time="2025-05-27T03:19:58.374554303Z" level=info msg="connecting to shim 3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f" address="unix:///run/containerd/s/b9de8f355ad8ec20e6b5605dea5f51d9b6c035edec69c7d4e71489ca6f11a3fc" namespace=k8s.io protocol=ttrpc version=3
May 27 03:19:58.375499 containerd[1881]: time="2025-05-27T03:19:58.375433195Z" level=info msg="Container 20b5399b91eaf46a5b8333fabbd31de1db51c03a06d8dc89b4fd2690c6eb2a9b: CDI devices from CRI Config.CDIDevices: []"
May 27 03:19:58.385148 containerd[1881]: time="2025-05-27T03:19:58.385107198Z" level=info msg="CreateContainer within sandbox \"5d2f67d7191f8e5cc0cfbe28764b078a8f201529b1783b10f21ebc8580d38f39\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20b5399b91eaf46a5b8333fabbd31de1db51c03a06d8dc89b4fd2690c6eb2a9b\""
May 27 03:19:58.387094 containerd[1881]: time="2025-05-27T03:19:58.386354964Z" level=info msg="StartContainer for \"20b5399b91eaf46a5b8333fabbd31de1db51c03a06d8dc89b4fd2690c6eb2a9b\""
May 27 03:19:58.392099 containerd[1881]: time="2025-05-27T03:19:58.391610283Z" level=info msg="connecting to shim 20b5399b91eaf46a5b8333fabbd31de1db51c03a06d8dc89b4fd2690c6eb2a9b" address="unix:///run/containerd/s/9f8fdddfe01941881b299549859e04589ccecad224a2c055e1f7c49be02bf624" protocol=ttrpc version=3
May 27 03:19:58.416445 systemd[1]: Started cri-containerd-3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f.scope - libcontainer container 3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f.
May 27 03:19:58.423978 systemd[1]: Started cri-containerd-20b5399b91eaf46a5b8333fabbd31de1db51c03a06d8dc89b4fd2690c6eb2a9b.scope - libcontainer container 20b5399b91eaf46a5b8333fabbd31de1db51c03a06d8dc89b4fd2690c6eb2a9b.
May 27 03:19:58.499775 containerd[1881]: time="2025-05-27T03:19:58.499728156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8g6mf,Uid:03a46d5a-12bb-4765-908d-1477aa0a0fb3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f\""
May 27 03:19:58.500305 containerd[1881]: time="2025-05-27T03:19:58.500274106Z" level=info msg="StartContainer for \"20b5399b91eaf46a5b8333fabbd31de1db51c03a06d8dc89b4fd2690c6eb2a9b\" returns successfully"
May 27 03:19:58.502206 containerd[1881]: time="2025-05-27T03:19:58.502177894Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 27 03:19:59.007107 kubelet[3223]: I0527 03:19:59.006695 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4ftcw" podStartSLOduration=2.006669319 podStartE2EDuration="2.006669319s" podCreationTimestamp="2025-05-27 03:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:19:59.006461069 +0000 UTC m=+8.226709768" watchObservedRunningTime="2025-05-27 03:19:59.006669319 +0000 UTC m=+8.226918017"
May 27 03:20:00.224181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3704847878.mount: Deactivated successfully.
May 27 03:20:01.893361 containerd[1881]: time="2025-05-27T03:20:01.893300752Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:20:01.897933 containerd[1881]: time="2025-05-27T03:20:01.897511923Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 27 03:20:01.900511 containerd[1881]: time="2025-05-27T03:20:01.900449706Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:20:01.920980 containerd[1881]: time="2025-05-27T03:20:01.920869999Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:20:01.927226 containerd[1881]: time="2025-05-27T03:20:01.927151834Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.424935886s"
May 27 03:20:01.927226 containerd[1881]: time="2025-05-27T03:20:01.927218766Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 27 03:20:01.996621 containerd[1881]: time="2025-05-27T03:20:01.996541414Z" level=info msg="CreateContainer within sandbox \"3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 27 03:20:02.080821 containerd[1881]: time="2025-05-27T03:20:02.062269060Z" level=info msg="Container 5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956: CDI devices from CRI Config.CDIDevices: []"
May 27 03:20:02.110898 containerd[1881]: time="2025-05-27T03:20:02.110842614Z" level=info msg="CreateContainer within sandbox \"3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\""
May 27 03:20:02.112906 containerd[1881]: time="2025-05-27T03:20:02.112872829Z" level=info msg="StartContainer for \"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\""
May 27 03:20:02.114589 containerd[1881]: time="2025-05-27T03:20:02.114550330Z" level=info msg="connecting to shim 5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956" address="unix:///run/containerd/s/b9de8f355ad8ec20e6b5605dea5f51d9b6c035edec69c7d4e71489ca6f11a3fc" protocol=ttrpc version=3
May 27 03:20:02.148531 systemd[1]: Started cri-containerd-5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956.scope - libcontainer container 5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956.
May 27 03:20:02.298556 containerd[1881]: time="2025-05-27T03:20:02.298434090Z" level=info msg="StartContainer for \"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\" returns successfully"
May 27 03:20:07.505258 sudo[2294]: pam_unix(sudo:session): session closed for user root
May 27 03:20:07.528486 sshd[2293]: Connection closed by 139.178.68.195 port 56318
May 27 03:20:07.529764 sshd-session[2291]: pam_unix(sshd:session): session closed for user core
May 27 03:20:07.539469 systemd[1]: sshd@8-172.31.29.86:22-139.178.68.195:56318.service: Deactivated successfully.
May 27 03:20:07.546781 systemd[1]: session-9.scope: Deactivated successfully.
May 27 03:20:07.547603 systemd[1]: session-9.scope: Consumed 4.738s CPU time, 152.4M memory peak.
May 27 03:20:07.551572 systemd-logind[1869]: Session 9 logged out. Waiting for processes to exit.
May 27 03:20:07.555541 systemd-logind[1869]: Removed session 9.
May 27 03:20:11.849172 sshd[2516]: Connection closed by authenticating user root 46.235.84.183 port 43966 [preauth]
May 27 03:20:11.851683 systemd[1]: sshd@9-172.31.29.86:22-46.235.84.183:43966.service: Deactivated successfully.
May 27 03:20:12.128278 systemd[1]: Started sshd@10-172.31.29.86:22-46.235.84.183:35356.service - OpenSSH per-connection server daemon (46.235.84.183:35356).
May 27 03:20:12.335425 kubelet[3223]: I0527 03:20:12.334824 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-8g6mf" podStartSLOduration=11.905012575 podStartE2EDuration="15.334785935s" podCreationTimestamp="2025-05-27 03:19:57 +0000 UTC" firstStartedPulling="2025-05-27 03:19:58.50171684 +0000 UTC m=+7.721965520" lastFinishedPulling="2025-05-27 03:20:01.931490194 +0000 UTC m=+11.151738880" observedRunningTime="2025-05-27 03:20:03.097606159 +0000 UTC m=+12.317854859" watchObservedRunningTime="2025-05-27 03:20:12.334785935 +0000 UTC m=+21.555034635"
May 27 03:20:12.351387 systemd[1]: Created slice kubepods-besteffort-pod643fee3b_a6ba_400a_94be_ecd2eca832d5.slice - libcontainer container kubepods-besteffort-pod643fee3b_a6ba_400a_94be_ecd2eca832d5.slice.
May 27 03:20:12.399545 kubelet[3223]: I0527 03:20:12.399181 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/643fee3b-a6ba-400a-94be-ecd2eca832d5-tigera-ca-bundle\") pod \"calico-typha-78f6cd676c-n4qz7\" (UID: \"643fee3b-a6ba-400a-94be-ecd2eca832d5\") " pod="calico-system/calico-typha-78f6cd676c-n4qz7" May 27 03:20:12.399545 kubelet[3223]: I0527 03:20:12.399232 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/643fee3b-a6ba-400a-94be-ecd2eca832d5-typha-certs\") pod \"calico-typha-78f6cd676c-n4qz7\" (UID: \"643fee3b-a6ba-400a-94be-ecd2eca832d5\") " pod="calico-system/calico-typha-78f6cd676c-n4qz7" May 27 03:20:12.399545 kubelet[3223]: I0527 03:20:12.399260 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhs5\" (UniqueName: \"kubernetes.io/projected/643fee3b-a6ba-400a-94be-ecd2eca832d5-kube-api-access-dxhs5\") pod \"calico-typha-78f6cd676c-n4qz7\" (UID: \"643fee3b-a6ba-400a-94be-ecd2eca832d5\") " pod="calico-system/calico-typha-78f6cd676c-n4qz7" May 27 03:20:12.655040 systemd[1]: Created slice kubepods-besteffort-pod94ce2812_346e_41f8_815c_4c0e82a71f4d.slice - libcontainer container kubepods-besteffort-pod94ce2812_346e_41f8_815c_4c0e82a71f4d.slice. 
May 27 03:20:12.660527 containerd[1881]: time="2025-05-27T03:20:12.660298512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f6cd676c-n4qz7,Uid:643fee3b-a6ba-400a-94be-ecd2eca832d5,Namespace:calico-system,Attempt:0,}" May 27 03:20:12.703090 kubelet[3223]: I0527 03:20:12.702671 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-cni-net-dir\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703090 kubelet[3223]: I0527 03:20:12.702727 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-cni-log-dir\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703090 kubelet[3223]: I0527 03:20:12.702752 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-var-lib-calico\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703090 kubelet[3223]: I0527 03:20:12.702783 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94ce2812-346e-41f8-815c-4c0e82a71f4d-tigera-ca-bundle\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703090 kubelet[3223]: I0527 03:20:12.702806 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crrhh\" (UniqueName: 
\"kubernetes.io/projected/94ce2812-346e-41f8-815c-4c0e82a71f4d-kube-api-access-crrhh\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703403 kubelet[3223]: I0527 03:20:12.702833 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-cni-bin-dir\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703403 kubelet[3223]: I0527 03:20:12.702853 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-xtables-lock\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703403 kubelet[3223]: I0527 03:20:12.702874 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-flexvol-driver-host\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703403 kubelet[3223]: I0527 03:20:12.702899 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-lib-modules\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703403 kubelet[3223]: I0527 03:20:12.702921 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-policysync\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703592 kubelet[3223]: I0527 03:20:12.702943 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/94ce2812-346e-41f8-815c-4c0e82a71f4d-var-run-calico\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.703592 kubelet[3223]: I0527 03:20:12.702967 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/94ce2812-346e-41f8-815c-4c0e82a71f4d-node-certs\") pod \"calico-node-rcpc9\" (UID: \"94ce2812-346e-41f8-815c-4c0e82a71f4d\") " pod="calico-system/calico-node-rcpc9" May 27 03:20:12.769285 containerd[1881]: time="2025-05-27T03:20:12.769232875Z" level=info msg="connecting to shim eb1f3a232f26c3780e7b6d2edc747fbe645992b08a75355a025f05175d8776fe" address="unix:///run/containerd/s/262d286ad8559960ea9772cf5fd0067aabc37a62aca4380536e7451535316001" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:12.810781 kubelet[3223]: E0527 03:20:12.810744 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.810781 kubelet[3223]: W0527 03:20:12.810767 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.810962 kubelet[3223]: E0527 03:20:12.810792 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.813291 systemd[1]: Started cri-containerd-eb1f3a232f26c3780e7b6d2edc747fbe645992b08a75355a025f05175d8776fe.scope - libcontainer container eb1f3a232f26c3780e7b6d2edc747fbe645992b08a75355a025f05175d8776fe. May 27 03:20:12.825200 kubelet[3223]: E0527 03:20:12.825165 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.825200 kubelet[3223]: W0527 03:20:12.825198 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.825392 kubelet[3223]: E0527 03:20:12.825227 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.843091 kubelet[3223]: E0527 03:20:12.843004 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.843091 kubelet[3223]: W0527 03:20:12.843043 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.843395 kubelet[3223]: E0527 03:20:12.843064 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.898293 containerd[1881]: time="2025-05-27T03:20:12.898239031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f6cd676c-n4qz7,Uid:643fee3b-a6ba-400a-94be-ecd2eca832d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb1f3a232f26c3780e7b6d2edc747fbe645992b08a75355a025f05175d8776fe\"" May 27 03:20:12.901665 containerd[1881]: time="2025-05-27T03:20:12.901580021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:20:12.965283 kubelet[3223]: E0527 03:20:12.964345 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zhsw" podUID="77d27265-0497-40a5-a7ec-8f874b3611ea" May 27 03:20:12.967588 containerd[1881]: time="2025-05-27T03:20:12.967543770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rcpc9,Uid:94ce2812-346e-41f8-815c-4c0e82a71f4d,Namespace:calico-system,Attempt:0,}" May 27 03:20:12.974145 kubelet[3223]: E0527 03:20:12.974096 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.974472 kubelet[3223]: W0527 03:20:12.974321 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.974472 kubelet[3223]: E0527 03:20:12.974388 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.975009 kubelet[3223]: E0527 03:20:12.974936 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.975009 kubelet[3223]: W0527 03:20:12.974954 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.975009 kubelet[3223]: E0527 03:20:12.974971 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.975586 kubelet[3223]: E0527 03:20:12.975546 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.975586 kubelet[3223]: W0527 03:20:12.975560 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.975782 kubelet[3223]: E0527 03:20:12.975714 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.976124 kubelet[3223]: E0527 03:20:12.976108 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.976297 kubelet[3223]: W0527 03:20:12.976185 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.976297 kubelet[3223]: E0527 03:20:12.976201 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.976820 kubelet[3223]: E0527 03:20:12.976750 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.977169 kubelet[3223]: W0527 03:20:12.977104 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.977169 kubelet[3223]: E0527 03:20:12.977124 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.978496 kubelet[3223]: E0527 03:20:12.978355 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.978496 kubelet[3223]: W0527 03:20:12.978379 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.978496 kubelet[3223]: E0527 03:20:12.978393 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.978857 kubelet[3223]: E0527 03:20:12.978802 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.978857 kubelet[3223]: W0527 03:20:12.978817 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.978857 kubelet[3223]: E0527 03:20:12.978834 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.979356 kubelet[3223]: E0527 03:20:12.979322 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.979527 kubelet[3223]: W0527 03:20:12.979336 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.979527 kubelet[3223]: E0527 03:20:12.979472 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.980082 kubelet[3223]: E0527 03:20:12.980053 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.980405 kubelet[3223]: W0527 03:20:12.980387 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.980573 kubelet[3223]: E0527 03:20:12.980468 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.980980 kubelet[3223]: E0527 03:20:12.980934 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.980980 kubelet[3223]: W0527 03:20:12.980948 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.980980 kubelet[3223]: E0527 03:20:12.980962 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.981705 kubelet[3223]: E0527 03:20:12.981626 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.981705 kubelet[3223]: W0527 03:20:12.981641 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.981705 kubelet[3223]: E0527 03:20:12.981656 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.982103 kubelet[3223]: E0527 03:20:12.982089 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.982608 kubelet[3223]: W0527 03:20:12.982176 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.982608 kubelet[3223]: E0527 03:20:12.982193 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.983066 kubelet[3223]: E0527 03:20:12.982990 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.983066 kubelet[3223]: W0527 03:20:12.983005 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.983066 kubelet[3223]: E0527 03:20:12.983020 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.983515 kubelet[3223]: E0527 03:20:12.983421 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.983515 kubelet[3223]: W0527 03:20:12.983436 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.983515 kubelet[3223]: E0527 03:20:12.983450 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.983965 kubelet[3223]: E0527 03:20:12.983889 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.983965 kubelet[3223]: W0527 03:20:12.983903 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.983965 kubelet[3223]: E0527 03:20:12.983916 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.984408 kubelet[3223]: E0527 03:20:12.984319 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.984408 kubelet[3223]: W0527 03:20:12.984333 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.984408 kubelet[3223]: E0527 03:20:12.984346 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.985421 kubelet[3223]: E0527 03:20:12.985347 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.985421 kubelet[3223]: W0527 03:20:12.985362 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.985421 kubelet[3223]: E0527 03:20:12.985376 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.985803 kubelet[3223]: E0527 03:20:12.985787 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.985929 kubelet[3223]: W0527 03:20:12.985866 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.985929 kubelet[3223]: E0527 03:20:12.985883 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.986491 kubelet[3223]: E0527 03:20:12.986300 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.986491 kubelet[3223]: W0527 03:20:12.986315 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.986491 kubelet[3223]: E0527 03:20:12.986328 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:12.987372 kubelet[3223]: E0527 03:20:12.987243 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:12.987372 kubelet[3223]: W0527 03:20:12.987259 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:12.987372 kubelet[3223]: E0527 03:20:12.987275 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:12.997912 containerd[1881]: time="2025-05-27T03:20:12.997858383Z" level=info msg="connecting to shim 9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f" address="unix:///run/containerd/s/14d6ae509b6ede3b514f13eecbccd197dca94e4da6a421c6ad388ed86f861d0c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:13.006882 kubelet[3223]: E0527 03:20:13.006792 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.006882 kubelet[3223]: W0527 03:20:13.006822 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.006882 kubelet[3223]: E0527 03:20:13.006849 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.008477 kubelet[3223]: I0527 03:20:13.008143 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77d27265-0497-40a5-a7ec-8f874b3611ea-socket-dir\") pod \"csi-node-driver-5zhsw\" (UID: \"77d27265-0497-40a5-a7ec-8f874b3611ea\") " pod="calico-system/csi-node-driver-5zhsw" May 27 03:20:13.009181 kubelet[3223]: E0527 03:20:13.009157 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.009181 kubelet[3223]: W0527 03:20:13.009180 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.009316 kubelet[3223]: E0527 03:20:13.009225 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.010824 kubelet[3223]: E0527 03:20:13.010541 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.010824 kubelet[3223]: W0527 03:20:13.010562 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.010824 kubelet[3223]: E0527 03:20:13.010652 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.011030 kubelet[3223]: E0527 03:20:13.010980 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.011030 kubelet[3223]: W0527 03:20:13.010993 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.011030 kubelet[3223]: E0527 03:20:13.011011 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.012259 kubelet[3223]: I0527 03:20:13.011279 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77d27265-0497-40a5-a7ec-8f874b3611ea-varrun\") pod \"csi-node-driver-5zhsw\" (UID: \"77d27265-0497-40a5-a7ec-8f874b3611ea\") " pod="calico-system/csi-node-driver-5zhsw" May 27 03:20:13.012308 kubelet[3223]: E0527 03:20:13.012298 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.012348 kubelet[3223]: W0527 03:20:13.012315 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.012393 kubelet[3223]: E0527 03:20:13.012352 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.013047 kubelet[3223]: E0527 03:20:13.013026 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.013047 kubelet[3223]: W0527 03:20:13.013046 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.013985 kubelet[3223]: E0527 03:20:13.013259 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.013985 kubelet[3223]: E0527 03:20:13.013326 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.013985 kubelet[3223]: W0527 03:20:13.013340 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.013985 kubelet[3223]: E0527 03:20:13.013353 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.013985 kubelet[3223]: I0527 03:20:13.013403 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgzc\" (UniqueName: \"kubernetes.io/projected/77d27265-0497-40a5-a7ec-8f874b3611ea-kube-api-access-fbgzc\") pod \"csi-node-driver-5zhsw\" (UID: \"77d27265-0497-40a5-a7ec-8f874b3611ea\") " pod="calico-system/csi-node-driver-5zhsw" May 27 03:20:13.013985 kubelet[3223]: E0527 03:20:13.013740 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.013985 kubelet[3223]: W0527 03:20:13.013751 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.013985 kubelet[3223]: E0527 03:20:13.013768 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.014383 kubelet[3223]: E0527 03:20:13.014099 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.014383 kubelet[3223]: W0527 03:20:13.014111 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.014383 kubelet[3223]: E0527 03:20:13.014137 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.014383 kubelet[3223]: E0527 03:20:13.014362 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.014383 kubelet[3223]: W0527 03:20:13.014373 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.014599 kubelet[3223]: E0527 03:20:13.014385 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.014599 kubelet[3223]: I0527 03:20:13.014427 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77d27265-0497-40a5-a7ec-8f874b3611ea-registration-dir\") pod \"csi-node-driver-5zhsw\" (UID: \"77d27265-0497-40a5-a7ec-8f874b3611ea\") " pod="calico-system/csi-node-driver-5zhsw" May 27 03:20:13.015826 kubelet[3223]: E0527 03:20:13.014709 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.015826 kubelet[3223]: W0527 03:20:13.014723 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.015826 kubelet[3223]: E0527 03:20:13.014739 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.015826 kubelet[3223]: E0527 03:20:13.014926 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.015826 kubelet[3223]: W0527 03:20:13.014955 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.015826 kubelet[3223]: E0527 03:20:13.014977 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.015826 kubelet[3223]: E0527 03:20:13.015269 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.015826 kubelet[3223]: W0527 03:20:13.015279 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.015826 kubelet[3223]: E0527 03:20:13.015292 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.017871 kubelet[3223]: I0527 03:20:13.015340 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d27265-0497-40a5-a7ec-8f874b3611ea-kubelet-dir\") pod \"csi-node-driver-5zhsw\" (UID: \"77d27265-0497-40a5-a7ec-8f874b3611ea\") " pod="calico-system/csi-node-driver-5zhsw" May 27 03:20:13.017871 kubelet[3223]: E0527 03:20:13.015594 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.017871 kubelet[3223]: W0527 03:20:13.015606 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.017871 kubelet[3223]: E0527 03:20:13.015619 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.017871 kubelet[3223]: E0527 03:20:13.016347 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.017871 kubelet[3223]: W0527 03:20:13.016359 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.017871 kubelet[3223]: E0527 03:20:13.016374 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.047962 systemd[1]: Started cri-containerd-9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f.scope - libcontainer container 9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f. May 27 03:20:13.116648 kubelet[3223]: E0527 03:20:13.116573 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.117091 kubelet[3223]: W0527 03:20:13.116826 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.117091 kubelet[3223]: E0527 03:20:13.116859 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.119063 kubelet[3223]: E0527 03:20:13.119023 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.119365 kubelet[3223]: W0527 03:20:13.119044 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.119365 kubelet[3223]: E0527 03:20:13.119262 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.120499 kubelet[3223]: E0527 03:20:13.120456 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.120499 kubelet[3223]: W0527 03:20:13.120475 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.121122 kubelet[3223]: E0527 03:20:13.121011 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.122185 kubelet[3223]: E0527 03:20:13.122155 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.122368 kubelet[3223]: W0527 03:20:13.122264 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.122446 kubelet[3223]: E0527 03:20:13.122433 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.122850 kubelet[3223]: E0527 03:20:13.122788 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.124277 kubelet[3223]: W0527 03:20:13.124108 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.124277 kubelet[3223]: E0527 03:20:13.124181 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.124833 kubelet[3223]: E0527 03:20:13.124803 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.124980 kubelet[3223]: W0527 03:20:13.124912 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.125128 kubelet[3223]: E0527 03:20:13.125065 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.126088 kubelet[3223]: E0527 03:20:13.126037 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.126088 kubelet[3223]: W0527 03:20:13.126053 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.126570 kubelet[3223]: E0527 03:20:13.126490 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.128151 containerd[1881]: time="2025-05-27T03:20:13.127475479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rcpc9,Uid:94ce2812-346e-41f8-815c-4c0e82a71f4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f\"" May 27 03:20:13.128417 kubelet[3223]: E0527 03:20:13.128360 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.128417 kubelet[3223]: W0527 03:20:13.128375 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.129445 kubelet[3223]: E0527 03:20:13.129195 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.129445 kubelet[3223]: W0527 03:20:13.129214 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.130119 kubelet[3223]: E0527 03:20:13.129674 3223 
plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.130119 kubelet[3223]: E0527 03:20:13.129770 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.130947 kubelet[3223]: E0527 03:20:13.130660 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.131045 kubelet[3223]: W0527 03:20:13.131021 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.131345 kubelet[3223]: E0527 03:20:13.131216 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.131720 kubelet[3223]: E0527 03:20:13.131686 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.131720 kubelet[3223]: W0527 03:20:13.131701 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.131990 kubelet[3223]: E0527 03:20:13.131943 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.132148 kubelet[3223]: E0527 03:20:13.132110 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.132148 kubelet[3223]: W0527 03:20:13.132123 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.132397 kubelet[3223]: E0527 03:20:13.132276 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.132463 kubelet[3223]: E0527 03:20:13.132436 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.132463 kubelet[3223]: W0527 03:20:13.132446 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.132654 kubelet[3223]: E0527 03:20:13.132640 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.132705 kubelet[3223]: W0527 03:20:13.132656 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.132793 kubelet[3223]: E0527 03:20:13.132648 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.132793 kubelet[3223]: E0527 03:20:13.132777 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.133951 kubelet[3223]: E0527 03:20:13.133932 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.134101 kubelet[3223]: W0527 03:20:13.134042 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.134317 kubelet[3223]: E0527 03:20:13.134260 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.134578 kubelet[3223]: E0527 03:20:13.134542 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.134578 kubelet[3223]: W0527 03:20:13.134556 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.135141 kubelet[3223]: E0527 03:20:13.135107 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.135436 kubelet[3223]: E0527 03:20:13.135403 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.135436 kubelet[3223]: W0527 03:20:13.135417 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.136103 kubelet[3223]: E0527 03:20:13.135665 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.136873 kubelet[3223]: E0527 03:20:13.136844 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.137093 kubelet[3223]: W0527 03:20:13.137048 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.137386 kubelet[3223]: E0527 03:20:13.137340 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.138342 kubelet[3223]: E0527 03:20:13.138309 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.138342 kubelet[3223]: W0527 03:20:13.138323 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.138640 kubelet[3223]: E0527 03:20:13.138475 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.138865 kubelet[3223]: E0527 03:20:13.138855 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.138938 kubelet[3223]: W0527 03:20:13.138929 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.139156 kubelet[3223]: E0527 03:20:13.139130 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.140755 kubelet[3223]: E0527 03:20:13.140740 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.140917 kubelet[3223]: W0527 03:20:13.140833 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.140998 kubelet[3223]: E0527 03:20:13.140983 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.141701 kubelet[3223]: E0527 03:20:13.141263 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.141701 kubelet[3223]: W0527 03:20:13.141640 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.141887 kubelet[3223]: E0527 03:20:13.141824 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.142311 kubelet[3223]: E0527 03:20:13.142163 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.142603 kubelet[3223]: W0527 03:20:13.142515 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.142751 kubelet[3223]: E0527 03:20:13.142690 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.143094 kubelet[3223]: E0527 03:20:13.143006 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.143094 kubelet[3223]: W0527 03:20:13.143021 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.143094 kubelet[3223]: E0527 03:20:13.143035 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:13.143577 kubelet[3223]: E0527 03:20:13.143567 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.143628 kubelet[3223]: W0527 03:20:13.143620 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.143672 kubelet[3223]: E0527 03:20:13.143664 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.159403 kubelet[3223]: E0527 03:20:13.159354 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:13.159661 kubelet[3223]: W0527 03:20:13.159379 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:13.159661 kubelet[3223]: E0527 03:20:13.159523 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:13.479307 sshd[3890]: Connection closed by authenticating user root 46.235.84.183 port 35356 [preauth] May 27 03:20:13.482126 systemd[1]: sshd@10-172.31.29.86:22-46.235.84.183:35356.service: Deactivated successfully. May 27 03:20:13.763394 systemd[1]: Started sshd@11-172.31.29.86:22-46.235.84.183:37014.service - OpenSSH per-connection server daemon (46.235.84.183:37014). May 27 03:20:14.431716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489716551.mount: Deactivated successfully. 
May 27 03:20:14.953385 kubelet[3223]: E0527 03:20:14.953334 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zhsw" podUID="77d27265-0497-40a5-a7ec-8f874b3611ea" May 27 03:20:15.351685 containerd[1881]: time="2025-05-27T03:20:15.351635053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:15.352678 containerd[1881]: time="2025-05-27T03:20:15.352555831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:20:15.354066 containerd[1881]: time="2025-05-27T03:20:15.354030647Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:15.356239 containerd[1881]: time="2025-05-27T03:20:15.356177533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:15.356924 containerd[1881]: time="2025-05-27T03:20:15.356755432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.455130917s" May 27 03:20:15.356924 containerd[1881]: time="2025-05-27T03:20:15.356791993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:20:15.358797 containerd[1881]: time="2025-05-27T03:20:15.358758882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:20:15.378043 containerd[1881]: time="2025-05-27T03:20:15.378008808Z" level=info msg="CreateContainer within sandbox \"eb1f3a232f26c3780e7b6d2edc747fbe645992b08a75355a025f05175d8776fe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:20:15.386774 containerd[1881]: time="2025-05-27T03:20:15.386734940Z" level=info msg="Container 52f9464b9bb45a63e5a795c52d1acdacc3b35c8bb80193c537154667405e2841: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:15.393543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3368552945.mount: Deactivated successfully. May 27 03:20:15.398827 containerd[1881]: time="2025-05-27T03:20:15.398784412Z" level=info msg="CreateContainer within sandbox \"eb1f3a232f26c3780e7b6d2edc747fbe645992b08a75355a025f05175d8776fe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"52f9464b9bb45a63e5a795c52d1acdacc3b35c8bb80193c537154667405e2841\"" May 27 03:20:15.399637 containerd[1881]: time="2025-05-27T03:20:15.399601025Z" level=info msg="StartContainer for \"52f9464b9bb45a63e5a795c52d1acdacc3b35c8bb80193c537154667405e2841\"" May 27 03:20:15.403513 containerd[1881]: time="2025-05-27T03:20:15.402622042Z" level=info msg="connecting to shim 52f9464b9bb45a63e5a795c52d1acdacc3b35c8bb80193c537154667405e2841" address="unix:///run/containerd/s/262d286ad8559960ea9772cf5fd0067aabc37a62aca4380536e7451535316001" protocol=ttrpc version=3 May 27 03:20:15.450277 systemd[1]: Started cri-containerd-52f9464b9bb45a63e5a795c52d1acdacc3b35c8bb80193c537154667405e2841.scope - libcontainer container 52f9464b9bb45a63e5a795c52d1acdacc3b35c8bb80193c537154667405e2841. 
May 27 03:20:15.512023 containerd[1881]: time="2025-05-27T03:20:15.511965406Z" level=info msg="StartContainer for \"52f9464b9bb45a63e5a795c52d1acdacc3b35c8bb80193c537154667405e2841\" returns successfully" May 27 03:20:16.101339 kubelet[3223]: I0527 03:20:16.100681 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78f6cd676c-n4qz7" podStartSLOduration=1.643324622 podStartE2EDuration="4.100664557s" podCreationTimestamp="2025-05-27 03:20:12 +0000 UTC" firstStartedPulling="2025-05-27 03:20:12.900523282 +0000 UTC m=+22.120771965" lastFinishedPulling="2025-05-27 03:20:15.35786322 +0000 UTC m=+24.578111900" observedRunningTime="2025-05-27 03:20:16.099201482 +0000 UTC m=+25.319450181" watchObservedRunningTime="2025-05-27 03:20:16.100664557 +0000 UTC m=+25.320913256" May 27 03:20:16.113246 kubelet[3223]: E0527 03:20:16.113201 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.113246 kubelet[3223]: W0527 03:20:16.113238 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.113454 kubelet[3223]: E0527 03:20:16.113261 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.113522 kubelet[3223]: E0527 03:20:16.113503 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.113522 kubelet[3223]: W0527 03:20:16.113517 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.113589 kubelet[3223]: E0527 03:20:16.113528 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.113811 kubelet[3223]: E0527 03:20:16.113697 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.113811 kubelet[3223]: W0527 03:20:16.113713 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.113811 kubelet[3223]: E0527 03:20:16.113729 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.114176 kubelet[3223]: E0527 03:20:16.114137 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.114176 kubelet[3223]: W0527 03:20:16.114149 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.115083 kubelet[3223]: E0527 03:20:16.114892 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.116717 kubelet[3223]: E0527 03:20:16.116669 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.116905 kubelet[3223]: W0527 03:20:16.116891 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.117156 kubelet[3223]: E0527 03:20:16.116971 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.117442 kubelet[3223]: E0527 03:20:16.117277 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.117604 kubelet[3223]: W0527 03:20:16.117516 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.117604 kubelet[3223]: E0527 03:20:16.117535 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.118030 kubelet[3223]: E0527 03:20:16.117823 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.118030 kubelet[3223]: W0527 03:20:16.117833 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.118030 kubelet[3223]: E0527 03:20:16.117842 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.119086 kubelet[3223]: E0527 03:20:16.119030 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.119086 kubelet[3223]: W0527 03:20:16.119042 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.119086 kubelet[3223]: E0527 03:20:16.119052 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.119430 kubelet[3223]: E0527 03:20:16.119359 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.119430 kubelet[3223]: W0527 03:20:16.119368 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.119430 kubelet[3223]: E0527 03:20:16.119377 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.119671 kubelet[3223]: E0527 03:20:16.119599 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.119671 kubelet[3223]: W0527 03:20:16.119608 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.119671 kubelet[3223]: E0527 03:20:16.119616 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.119921 kubelet[3223]: E0527 03:20:16.119849 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.119921 kubelet[3223]: W0527 03:20:16.119857 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.119921 kubelet[3223]: E0527 03:20:16.119867 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.120180 kubelet[3223]: E0527 03:20:16.120168 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.120528 kubelet[3223]: W0527 03:20:16.120516 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.120657 kubelet[3223]: E0527 03:20:16.120584 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.120776 kubelet[3223]: E0527 03:20:16.120768 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.120883 kubelet[3223]: W0527 03:20:16.120797 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.120883 kubelet[3223]: E0527 03:20:16.120806 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.121082 kubelet[3223]: E0527 03:20:16.121018 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.121082 kubelet[3223]: W0527 03:20:16.121026 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.121082 kubelet[3223]: E0527 03:20:16.121034 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.121545 kubelet[3223]: E0527 03:20:16.121464 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.121545 kubelet[3223]: W0527 03:20:16.121474 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.121545 kubelet[3223]: E0527 03:20:16.121484 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.150917 kubelet[3223]: E0527 03:20:16.150881 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.150917 kubelet[3223]: W0527 03:20:16.150907 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.151218 kubelet[3223]: E0527 03:20:16.150931 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.151218 kubelet[3223]: E0527 03:20:16.151211 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.151331 kubelet[3223]: W0527 03:20:16.151280 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.151331 kubelet[3223]: E0527 03:20:16.151304 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.151565 kubelet[3223]: E0527 03:20:16.151547 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.151565 kubelet[3223]: W0527 03:20:16.151561 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.151664 kubelet[3223]: E0527 03:20:16.151591 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.151863 kubelet[3223]: E0527 03:20:16.151845 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.151863 kubelet[3223]: W0527 03:20:16.151859 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.152030 kubelet[3223]: E0527 03:20:16.151877 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.152128 kubelet[3223]: E0527 03:20:16.152083 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.152128 kubelet[3223]: W0527 03:20:16.152095 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.152128 kubelet[3223]: E0527 03:20:16.152113 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.152321 kubelet[3223]: E0527 03:20:16.152305 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.152321 kubelet[3223]: W0527 03:20:16.152315 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.152542 kubelet[3223]: E0527 03:20:16.152333 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.152598 kubelet[3223]: E0527 03:20:16.152548 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.152598 kubelet[3223]: W0527 03:20:16.152558 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.152598 kubelet[3223]: E0527 03:20:16.152577 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.153212 kubelet[3223]: E0527 03:20:16.153190 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.153293 kubelet[3223]: W0527 03:20:16.153205 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.153366 kubelet[3223]: E0527 03:20:16.153305 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.153527 kubelet[3223]: E0527 03:20:16.153510 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.153527 kubelet[3223]: W0527 03:20:16.153524 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.153626 kubelet[3223]: E0527 03:20:16.153610 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.153817 kubelet[3223]: E0527 03:20:16.153785 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.153817 kubelet[3223]: W0527 03:20:16.153800 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.153920 kubelet[3223]: E0527 03:20:16.153845 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.154168 kubelet[3223]: E0527 03:20:16.154139 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.154168 kubelet[3223]: W0527 03:20:16.154162 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.154271 kubelet[3223]: E0527 03:20:16.154193 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.154492 kubelet[3223]: E0527 03:20:16.154474 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.154492 kubelet[3223]: W0527 03:20:16.154489 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.154597 kubelet[3223]: E0527 03:20:16.154518 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.154796 kubelet[3223]: E0527 03:20:16.154779 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.154796 kubelet[3223]: W0527 03:20:16.154793 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.154894 kubelet[3223]: E0527 03:20:16.154820 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.155096 kubelet[3223]: E0527 03:20:16.155052 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.155096 kubelet[3223]: W0527 03:20:16.155066 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.155201 kubelet[3223]: E0527 03:20:16.155099 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.155369 kubelet[3223]: E0527 03:20:16.155348 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.155369 kubelet[3223]: W0527 03:20:16.155363 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.155458 kubelet[3223]: E0527 03:20:16.155375 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.155620 kubelet[3223]: E0527 03:20:16.155599 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.155620 kubelet[3223]: W0527 03:20:16.155613 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.155708 kubelet[3223]: E0527 03:20:16.155625 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.156341 kubelet[3223]: E0527 03:20:16.155954 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.156341 kubelet[3223]: W0527 03:20:16.155963 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.156341 kubelet[3223]: E0527 03:20:16.155979 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:16.156341 kubelet[3223]: E0527 03:20:16.156269 3223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:16.156341 kubelet[3223]: W0527 03:20:16.156279 3223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:16.156341 kubelet[3223]: E0527 03:20:16.156289 3223 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:16.552643 containerd[1881]: time="2025-05-27T03:20:16.552501945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:16.554921 containerd[1881]: time="2025-05-27T03:20:16.554247027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:20:16.555849 containerd[1881]: time="2025-05-27T03:20:16.555671343Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:16.563233 containerd[1881]: time="2025-05-27T03:20:16.563169076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:16.564359 containerd[1881]: time="2025-05-27T03:20:16.564226100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.205130195s" May 27 03:20:16.564359 containerd[1881]: time="2025-05-27T03:20:16.564267119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:20:16.567821 containerd[1881]: time="2025-05-27T03:20:16.567779793Z" level=info msg="CreateContainer within sandbox \"9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:20:16.604093 containerd[1881]: time="2025-05-27T03:20:16.603217091Z" level=info msg="Container 914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:16.612846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778129046.mount: Deactivated successfully. May 27 03:20:16.628521 containerd[1881]: time="2025-05-27T03:20:16.628465061Z" level=info msg="CreateContainer within sandbox \"9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d\"" May 27 03:20:16.631191 containerd[1881]: time="2025-05-27T03:20:16.630705870Z" level=info msg="StartContainer for \"914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d\"" May 27 03:20:16.635495 containerd[1881]: time="2025-05-27T03:20:16.635447301Z" level=info msg="connecting to shim 914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d" address="unix:///run/containerd/s/14d6ae509b6ede3b514f13eecbccd197dca94e4da6a421c6ad388ed86f861d0c" protocol=ttrpc version=3 May 27 03:20:16.685908 systemd[1]: Started cri-containerd-914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d.scope - libcontainer container 914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d. May 27 03:20:16.772573 containerd[1881]: time="2025-05-27T03:20:16.772531984Z" level=info msg="StartContainer for \"914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d\" returns successfully" May 27 03:20:16.780014 systemd[1]: cri-containerd-914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d.scope: Deactivated successfully. 
May 27 03:20:16.890925 containerd[1881]: time="2025-05-27T03:20:16.890786382Z" level=info msg="received exit event container_id:\"914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d\" id:\"914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d\" pid:4180 exited_at:{seconds:1748316016 nanos:787482683}" May 27 03:20:16.891701 containerd[1881]: time="2025-05-27T03:20:16.891656905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d\" id:\"914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d\" pid:4180 exited_at:{seconds:1748316016 nanos:787482683}" May 27 03:20:16.957108 kubelet[3223]: E0527 03:20:16.956305 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zhsw" podUID="77d27265-0497-40a5-a7ec-8f874b3611ea" May 27 03:20:16.964277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-914d8e4f88858f6be51e5249c7896eef8cce7bb1ffe1110d522868d655c95f4d-rootfs.mount: Deactivated successfully. 
May 27 03:20:17.099586 containerd[1881]: time="2025-05-27T03:20:17.099422798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:20:18.952260 kubelet[3223]: E0527 03:20:18.952068 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zhsw" podUID="77d27265-0497-40a5-a7ec-8f874b3611ea" May 27 03:20:20.276671 containerd[1881]: time="2025-05-27T03:20:20.276604384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:20.277774 containerd[1881]: time="2025-05-27T03:20:20.277650634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:20:20.278851 containerd[1881]: time="2025-05-27T03:20:20.278814204Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:20.281830 containerd[1881]: time="2025-05-27T03:20:20.281133030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:20.281830 containerd[1881]: time="2025-05-27T03:20:20.281706456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.182050128s" May 27 03:20:20.281830 containerd[1881]: time="2025-05-27T03:20:20.281733179Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:20:20.284039 containerd[1881]: time="2025-05-27T03:20:20.283992710Z" level=info msg="CreateContainer within sandbox \"9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:20:20.297313 containerd[1881]: time="2025-05-27T03:20:20.297276039Z" level=info msg="Container 33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:20.312454 containerd[1881]: time="2025-05-27T03:20:20.312403712Z" level=info msg="CreateContainer within sandbox \"9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e\"" May 27 03:20:20.314126 containerd[1881]: time="2025-05-27T03:20:20.313201361Z" level=info msg="StartContainer for \"33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e\"" May 27 03:20:20.314906 containerd[1881]: time="2025-05-27T03:20:20.314873532Z" level=info msg="connecting to shim 33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e" address="unix:///run/containerd/s/14d6ae509b6ede3b514f13eecbccd197dca94e4da6a421c6ad388ed86f861d0c" protocol=ttrpc version=3 May 27 03:20:20.343261 systemd[1]: Started cri-containerd-33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e.scope - libcontainer container 33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e. 
May 27 03:20:20.404280 containerd[1881]: time="2025-05-27T03:20:20.404245486Z" level=info msg="StartContainer for \"33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e\" returns successfully" May 27 03:20:20.951278 kubelet[3223]: E0527 03:20:20.951218 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zhsw" podUID="77d27265-0497-40a5-a7ec-8f874b3611ea" May 27 03:20:21.534903 systemd[1]: cri-containerd-33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e.scope: Deactivated successfully. May 27 03:20:21.535304 systemd[1]: cri-containerd-33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e.scope: Consumed 570ms CPU time, 161.6M memory peak, 13.4M read from disk, 170.9M written to disk. May 27 03:20:21.546405 containerd[1881]: time="2025-05-27T03:20:21.545691437Z" level=info msg="received exit event container_id:\"33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e\" id:\"33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e\" pid:4237 exited_at:{seconds:1748316021 nanos:545414265}" May 27 03:20:21.546405 containerd[1881]: time="2025-05-27T03:20:21.546213458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e\" id:\"33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e\" pid:4237 exited_at:{seconds:1748316021 nanos:545414265}" May 27 03:20:21.626444 kubelet[3223]: I0527 03:20:21.626393 3223 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 27 03:20:21.648650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33beb43ee74347bcc0e49494020a241b7342325821b00009df674a069cf0f49e-rootfs.mount: Deactivated successfully. 
May 27 03:20:21.739392 systemd[1]: Created slice kubepods-burstable-pod7de6b694_89fa_4461_9951_cd87b692bf22.slice - libcontainer container kubepods-burstable-pod7de6b694_89fa_4461_9951_cd87b692bf22.slice. May 27 03:20:21.749668 systemd[1]: Created slice kubepods-burstable-pod62a9d7fb_b029_4eeb_bbd7_6774fa0d0974.slice - libcontainer container kubepods-burstable-pod62a9d7fb_b029_4eeb_bbd7_6774fa0d0974.slice. May 27 03:20:21.765811 systemd[1]: Created slice kubepods-besteffort-podee378e2a_6923_47d9_907b_991e204e4fb5.slice - libcontainer container kubepods-besteffort-podee378e2a_6923_47d9_907b_991e204e4fb5.slice. May 27 03:20:21.775718 systemd[1]: Created slice kubepods-besteffort-podda64d26a_0dc2_4c54_b72b_53c0531d11be.slice - libcontainer container kubepods-besteffort-podda64d26a_0dc2_4c54_b72b_53c0531d11be.slice. May 27 03:20:21.788119 systemd[1]: Created slice kubepods-besteffort-podbf9ca194_f3ff_4b3d_a182_dced4faeca72.slice - libcontainer container kubepods-besteffort-podbf9ca194_f3ff_4b3d_a182_dced4faeca72.slice. May 27 03:20:21.809151 systemd[1]: Created slice kubepods-besteffort-pod4bb469da_f9bf_4f89_a896_e1bdeeceda6d.slice - libcontainer container kubepods-besteffort-pod4bb469da_f9bf_4f89_a896_e1bdeeceda6d.slice. 
May 27 03:20:21.829093 kubelet[3223]: I0527 03:20:21.828264 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-backend-key-pair\") pod \"whisker-564dd777b4-ttwlc\" (UID: \"da64d26a-0dc2-4c54-b72b-53c0531d11be\") " pod="calico-system/whisker-564dd777b4-ttwlc" May 27 03:20:21.829093 kubelet[3223]: I0527 03:20:21.828311 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-ca-bundle\") pod \"whisker-564dd777b4-ttwlc\" (UID: \"da64d26a-0dc2-4c54-b72b-53c0531d11be\") " pod="calico-system/whisker-564dd777b4-ttwlc" May 27 03:20:21.829093 kubelet[3223]: I0527 03:20:21.828342 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z42lb\" (UniqueName: \"kubernetes.io/projected/da64d26a-0dc2-4c54-b72b-53c0531d11be-kube-api-access-z42lb\") pod \"whisker-564dd777b4-ttwlc\" (UID: \"da64d26a-0dc2-4c54-b72b-53c0531d11be\") " pod="calico-system/whisker-564dd777b4-ttwlc" May 27 03:20:21.829093 kubelet[3223]: I0527 03:20:21.828387 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bb469da-f9bf-4f89-a896-e1bdeeceda6d-config\") pod \"goldmane-8f77d7b6c-bwzz8\" (UID: \"4bb469da-f9bf-4f89-a896-e1bdeeceda6d\") " pod="calico-system/goldmane-8f77d7b6c-bwzz8" May 27 03:20:21.829093 kubelet[3223]: I0527 03:20:21.828421 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bb469da-f9bf-4f89-a896-e1bdeeceda6d-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-bwzz8\" (UID: \"4bb469da-f9bf-4f89-a896-e1bdeeceda6d\") " 
pod="calico-system/goldmane-8f77d7b6c-bwzz8" May 27 03:20:21.829456 kubelet[3223]: I0527 03:20:21.828452 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qch\" (UniqueName: \"kubernetes.io/projected/7de6b694-89fa-4461-9951-cd87b692bf22-kube-api-access-82qch\") pod \"coredns-7c65d6cfc9-4z6jd\" (UID: \"7de6b694-89fa-4461-9951-cd87b692bf22\") " pod="kube-system/coredns-7c65d6cfc9-4z6jd" May 27 03:20:21.829456 kubelet[3223]: I0527 03:20:21.828485 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgfl\" (UniqueName: \"kubernetes.io/projected/63cd9f6c-74e1-4a65-8a52-ed51845517e4-kube-api-access-pqgfl\") pod \"calico-apiserver-5b95dd4f5c-krkwc\" (UID: \"63cd9f6c-74e1-4a65-8a52-ed51845517e4\") " pod="calico-apiserver/calico-apiserver-5b95dd4f5c-krkwc" May 27 03:20:21.829456 kubelet[3223]: I0527 03:20:21.828512 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4bb469da-f9bf-4f89-a896-e1bdeeceda6d-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-bwzz8\" (UID: \"4bb469da-f9bf-4f89-a896-e1bdeeceda6d\") " pod="calico-system/goldmane-8f77d7b6c-bwzz8" May 27 03:20:21.829456 kubelet[3223]: I0527 03:20:21.828544 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee378e2a-6923-47d9-907b-991e204e4fb5-calico-apiserver-certs\") pod \"calico-apiserver-5b95dd4f5c-lmmvx\" (UID: \"ee378e2a-6923-47d9-907b-991e204e4fb5\") " pod="calico-apiserver/calico-apiserver-5b95dd4f5c-lmmvx" May 27 03:20:21.829456 kubelet[3223]: I0527 03:20:21.828589 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/62a9d7fb-b029-4eeb-bbd7-6774fa0d0974-config-volume\") pod \"coredns-7c65d6cfc9-mfctx\" (UID: \"62a9d7fb-b029-4eeb-bbd7-6774fa0d0974\") " pod="kube-system/coredns-7c65d6cfc9-mfctx" May 27 03:20:21.830888 kubelet[3223]: I0527 03:20:21.830027 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de6b694-89fa-4461-9951-cd87b692bf22-config-volume\") pod \"coredns-7c65d6cfc9-4z6jd\" (UID: \"7de6b694-89fa-4461-9951-cd87b692bf22\") " pod="kube-system/coredns-7c65d6cfc9-4z6jd" May 27 03:20:21.830888 kubelet[3223]: I0527 03:20:21.830586 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zvr\" (UniqueName: \"kubernetes.io/projected/62a9d7fb-b029-4eeb-bbd7-6774fa0d0974-kube-api-access-29zvr\") pod \"coredns-7c65d6cfc9-mfctx\" (UID: \"62a9d7fb-b029-4eeb-bbd7-6774fa0d0974\") " pod="kube-system/coredns-7c65d6cfc9-mfctx" May 27 03:20:21.830888 kubelet[3223]: I0527 03:20:21.830673 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggznq\" (UniqueName: \"kubernetes.io/projected/4bb469da-f9bf-4f89-a896-e1bdeeceda6d-kube-api-access-ggznq\") pod \"goldmane-8f77d7b6c-bwzz8\" (UID: \"4bb469da-f9bf-4f89-a896-e1bdeeceda6d\") " pod="calico-system/goldmane-8f77d7b6c-bwzz8" May 27 03:20:21.830888 kubelet[3223]: I0527 03:20:21.830745 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9ca194-f3ff-4b3d-a182-dced4faeca72-tigera-ca-bundle\") pod \"calico-kube-controllers-5bfbc5bcd-5fzgz\" (UID: \"bf9ca194-f3ff-4b3d-a182-dced4faeca72\") " pod="calico-system/calico-kube-controllers-5bfbc5bcd-5fzgz" May 27 03:20:21.831197 kubelet[3223]: I0527 03:20:21.830971 3223 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63cd9f6c-74e1-4a65-8a52-ed51845517e4-calico-apiserver-certs\") pod \"calico-apiserver-5b95dd4f5c-krkwc\" (UID: \"63cd9f6c-74e1-4a65-8a52-ed51845517e4\") " pod="calico-apiserver/calico-apiserver-5b95dd4f5c-krkwc" May 27 03:20:21.831197 kubelet[3223]: I0527 03:20:21.831136 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcwj\" (UniqueName: \"kubernetes.io/projected/ee378e2a-6923-47d9-907b-991e204e4fb5-kube-api-access-wmcwj\") pod \"calico-apiserver-5b95dd4f5c-lmmvx\" (UID: \"ee378e2a-6923-47d9-907b-991e204e4fb5\") " pod="calico-apiserver/calico-apiserver-5b95dd4f5c-lmmvx" May 27 03:20:21.831494 kubelet[3223]: I0527 03:20:21.831299 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gz4w\" (UniqueName: \"kubernetes.io/projected/bf9ca194-f3ff-4b3d-a182-dced4faeca72-kube-api-access-5gz4w\") pod \"calico-kube-controllers-5bfbc5bcd-5fzgz\" (UID: \"bf9ca194-f3ff-4b3d-a182-dced4faeca72\") " pod="calico-system/calico-kube-controllers-5bfbc5bcd-5fzgz" May 27 03:20:21.846367 systemd[1]: Created slice kubepods-besteffort-pod63cd9f6c_74e1_4a65_8a52_ed51845517e4.slice - libcontainer container kubepods-besteffort-pod63cd9f6c_74e1_4a65_8a52_ed51845517e4.slice. 
May 27 03:20:22.048429 containerd[1881]: time="2025-05-27T03:20:22.047540407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4z6jd,Uid:7de6b694-89fa-4461-9951-cd87b692bf22,Namespace:kube-system,Attempt:0,}" May 27 03:20:22.057328 containerd[1881]: time="2025-05-27T03:20:22.057283544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mfctx,Uid:62a9d7fb-b029-4eeb-bbd7-6774fa0d0974,Namespace:kube-system,Attempt:0,}" May 27 03:20:22.082998 containerd[1881]: time="2025-05-27T03:20:22.082808148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564dd777b4-ttwlc,Uid:da64d26a-0dc2-4c54-b72b-53c0531d11be,Namespace:calico-system,Attempt:0,}" May 27 03:20:22.095985 containerd[1881]: time="2025-05-27T03:20:22.095819223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-lmmvx,Uid:ee378e2a-6923-47d9-907b-991e204e4fb5,Namespace:calico-apiserver,Attempt:0,}" May 27 03:20:22.097394 containerd[1881]: time="2025-05-27T03:20:22.096281366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfbc5bcd-5fzgz,Uid:bf9ca194-f3ff-4b3d-a182-dced4faeca72,Namespace:calico-system,Attempt:0,}" May 27 03:20:22.138600 containerd[1881]: time="2025-05-27T03:20:22.138558395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-bwzz8,Uid:4bb469da-f9bf-4f89-a896-e1bdeeceda6d,Namespace:calico-system,Attempt:0,}" May 27 03:20:22.152024 containerd[1881]: time="2025-05-27T03:20:22.151982201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-krkwc,Uid:63cd9f6c-74e1-4a65-8a52-ed51845517e4,Namespace:calico-apiserver,Attempt:0,}" May 27 03:20:22.186545 containerd[1881]: time="2025-05-27T03:20:22.186473996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:20:22.540664 containerd[1881]: time="2025-05-27T03:20:22.540595148Z" level=error msg="Failed to destroy network for 
sandbox \"6936586ec1af1096f85c5858e2b60b0c55590d7c3c6e6c0ad4fb7ef2b1709a4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.541375 containerd[1881]: time="2025-05-27T03:20:22.541320865Z" level=error msg="Failed to destroy network for sandbox \"2be2c82926675d0ca3b6609643bfd9f6e829e1eb3c7c2561a3d3b788575ed081\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.543169 containerd[1881]: time="2025-05-27T03:20:22.543121832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4z6jd,Uid:7de6b694-89fa-4461-9951-cd87b692bf22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6936586ec1af1096f85c5858e2b60b0c55590d7c3c6e6c0ad4fb7ef2b1709a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.543610 containerd[1881]: time="2025-05-27T03:20:22.543558133Z" level=error msg="Failed to destroy network for sandbox \"02c6715fec41658393eb5859499b861fa7e576b7ce66a2efe6fea17c240861d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.544772 containerd[1881]: time="2025-05-27T03:20:22.544248116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564dd777b4-ttwlc,Uid:da64d26a-0dc2-4c54-b72b-53c0531d11be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2be2c82926675d0ca3b6609643bfd9f6e829e1eb3c7c2561a3d3b788575ed081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.547167 containerd[1881]: time="2025-05-27T03:20:22.547116310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-lmmvx,Uid:ee378e2a-6923-47d9-907b-991e204e4fb5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c6715fec41658393eb5859499b861fa7e576b7ce66a2efe6fea17c240861d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.552811 containerd[1881]: time="2025-05-27T03:20:22.552763484Z" level=error msg="Failed to destroy network for sandbox \"8b3bd1c8e4bc3d18d26bbfe41934f9c935ba03b894dbb23ef6afbe64ae9bdffb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.553162 containerd[1881]: time="2025-05-27T03:20:22.553001912Z" level=error msg="Failed to destroy network for sandbox \"7b3c8ab068f7dcacba0c79e31ddfec800c4195e08ad3fadc87c2a184adcefad9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.553424 kubelet[3223]: E0527 03:20:22.553379 3223 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c6715fec41658393eb5859499b861fa7e576b7ce66a2efe6fea17c240861d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.554148 kubelet[3223]: E0527 03:20:22.553371 3223 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6936586ec1af1096f85c5858e2b60b0c55590d7c3c6e6c0ad4fb7ef2b1709a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.555919 containerd[1881]: time="2025-05-27T03:20:22.555045316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mfctx,Uid:62a9d7fb-b029-4eeb-bbd7-6774fa0d0974,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b3bd1c8e4bc3d18d26bbfe41934f9c935ba03b894dbb23ef6afbe64ae9bdffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.556054 kubelet[3223]: E0527 03:20:22.555722 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6936586ec1af1096f85c5858e2b60b0c55590d7c3c6e6c0ad4fb7ef2b1709a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4z6jd" May 27 03:20:22.556054 kubelet[3223]: E0527 03:20:22.555766 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6936586ec1af1096f85c5858e2b60b0c55590d7c3c6e6c0ad4fb7ef2b1709a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4z6jd" May 27 03:20:22.556379 kubelet[3223]: E0527 03:20:22.556353 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c6715fec41658393eb5859499b861fa7e576b7ce66a2efe6fea17c240861d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-lmmvx" May 27 03:20:22.556487 kubelet[3223]: E0527 03:20:22.556468 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c6715fec41658393eb5859499b861fa7e576b7ce66a2efe6fea17c240861d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-lmmvx" May 27 03:20:22.557523 kubelet[3223]: E0527 03:20:22.557251 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b95dd4f5c-lmmvx_calico-apiserver(ee378e2a-6923-47d9-907b-991e204e4fb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b95dd4f5c-lmmvx_calico-apiserver(ee378e2a-6923-47d9-907b-991e204e4fb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02c6715fec41658393eb5859499b861fa7e576b7ce66a2efe6fea17c240861d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-lmmvx" podUID="ee378e2a-6923-47d9-907b-991e204e4fb5" May 27 03:20:22.557651 containerd[1881]: time="2025-05-27T03:20:22.557247180Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-krkwc,Uid:63cd9f6c-74e1-4a65-8a52-ed51845517e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c8ab068f7dcacba0c79e31ddfec800c4195e08ad3fadc87c2a184adcefad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.559000 kubelet[3223]: E0527 03:20:22.558317 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4z6jd_kube-system(7de6b694-89fa-4461-9951-cd87b692bf22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4z6jd_kube-system(7de6b694-89fa-4461-9951-cd87b692bf22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6936586ec1af1096f85c5858e2b60b0c55590d7c3c6e6c0ad4fb7ef2b1709a4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4z6jd" podUID="7de6b694-89fa-4461-9951-cd87b692bf22" May 27 03:20:22.559000 kubelet[3223]: E0527 03:20:22.553455 3223 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be2c82926675d0ca3b6609643bfd9f6e829e1eb3c7c2561a3d3b788575ed081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.559000 kubelet[3223]: E0527 03:20:22.558391 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2be2c82926675d0ca3b6609643bfd9f6e829e1eb3c7c2561a3d3b788575ed081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-564dd777b4-ttwlc" May 27 03:20:22.559225 kubelet[3223]: E0527 03:20:22.558416 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be2c82926675d0ca3b6609643bfd9f6e829e1eb3c7c2561a3d3b788575ed081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-564dd777b4-ttwlc" May 27 03:20:22.559225 kubelet[3223]: E0527 03:20:22.558452 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-564dd777b4-ttwlc_calico-system(da64d26a-0dc2-4c54-b72b-53c0531d11be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-564dd777b4-ttwlc_calico-system(da64d26a-0dc2-4c54-b72b-53c0531d11be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2be2c82926675d0ca3b6609643bfd9f6e829e1eb3c7c2561a3d3b788575ed081\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-564dd777b4-ttwlc" podUID="da64d26a-0dc2-4c54-b72b-53c0531d11be" May 27 03:20:22.559225 kubelet[3223]: E0527 03:20:22.558558 3223 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c8ab068f7dcacba0c79e31ddfec800c4195e08ad3fadc87c2a184adcefad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" May 27 03:20:22.559407 kubelet[3223]: E0527 03:20:22.558583 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c8ab068f7dcacba0c79e31ddfec800c4195e08ad3fadc87c2a184adcefad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-krkwc" May 27 03:20:22.559407 kubelet[3223]: E0527 03:20:22.558605 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3c8ab068f7dcacba0c79e31ddfec800c4195e08ad3fadc87c2a184adcefad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-krkwc" May 27 03:20:22.559407 kubelet[3223]: E0527 03:20:22.558637 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b95dd4f5c-krkwc_calico-apiserver(63cd9f6c-74e1-4a65-8a52-ed51845517e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b95dd4f5c-krkwc_calico-apiserver(63cd9f6c-74e1-4a65-8a52-ed51845517e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b3c8ab068f7dcacba0c79e31ddfec800c4195e08ad3fadc87c2a184adcefad9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-krkwc" podUID="63cd9f6c-74e1-4a65-8a52-ed51845517e4" May 27 03:20:22.559563 kubelet[3223]: E0527 03:20:22.558676 3223 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b3bd1c8e4bc3d18d26bbfe41934f9c935ba03b894dbb23ef6afbe64ae9bdffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.559563 kubelet[3223]: E0527 03:20:22.558699 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b3bd1c8e4bc3d18d26bbfe41934f9c935ba03b894dbb23ef6afbe64ae9bdffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mfctx" May 27 03:20:22.559563 kubelet[3223]: E0527 03:20:22.558719 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b3bd1c8e4bc3d18d26bbfe41934f9c935ba03b894dbb23ef6afbe64ae9bdffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mfctx" May 27 03:20:22.559705 kubelet[3223]: E0527 03:20:22.558751 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mfctx_kube-system(62a9d7fb-b029-4eeb-bbd7-6774fa0d0974)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mfctx_kube-system(62a9d7fb-b029-4eeb-bbd7-6774fa0d0974)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b3bd1c8e4bc3d18d26bbfe41934f9c935ba03b894dbb23ef6afbe64ae9bdffb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mfctx" podUID="62a9d7fb-b029-4eeb-bbd7-6774fa0d0974" May 27 03:20:22.562730 containerd[1881]: time="2025-05-27T03:20:22.561661123Z" level=error msg="Failed to destroy network for sandbox \"95f744a6d91941cfbe14fb6af42db1fc7f66bda76b9ded2d4514db393ec22dea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.567161 containerd[1881]: time="2025-05-27T03:20:22.564901967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfbc5bcd-5fzgz,Uid:bf9ca194-f3ff-4b3d-a182-dced4faeca72,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f744a6d91941cfbe14fb6af42db1fc7f66bda76b9ded2d4514db393ec22dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.568475 kubelet[3223]: E0527 03:20:22.568434 3223 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f744a6d91941cfbe14fb6af42db1fc7f66bda76b9ded2d4514db393ec22dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.568862 kubelet[3223]: E0527 03:20:22.568655 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f744a6d91941cfbe14fb6af42db1fc7f66bda76b9ded2d4514db393ec22dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5bfbc5bcd-5fzgz" May 27 03:20:22.568862 kubelet[3223]: E0527 03:20:22.568734 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f744a6d91941cfbe14fb6af42db1fc7f66bda76b9ded2d4514db393ec22dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bfbc5bcd-5fzgz" May 27 03:20:22.568862 kubelet[3223]: E0527 03:20:22.568800 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bfbc5bcd-5fzgz_calico-system(bf9ca194-f3ff-4b3d-a182-dced4faeca72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bfbc5bcd-5fzgz_calico-system(bf9ca194-f3ff-4b3d-a182-dced4faeca72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95f744a6d91941cfbe14fb6af42db1fc7f66bda76b9ded2d4514db393ec22dea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bfbc5bcd-5fzgz" podUID="bf9ca194-f3ff-4b3d-a182-dced4faeca72" May 27 03:20:22.571168 containerd[1881]: time="2025-05-27T03:20:22.571125592Z" level=error msg="Failed to destroy network for sandbox \"67f082e5853d89e45b1603965108cefb9e5611a837b7b44c0c6b6a3dac48ea50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.572627 containerd[1881]: time="2025-05-27T03:20:22.572590416Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-8f77d7b6c-bwzz8,Uid:4bb469da-f9bf-4f89-a896-e1bdeeceda6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f082e5853d89e45b1603965108cefb9e5611a837b7b44c0c6b6a3dac48ea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.572995 kubelet[3223]: E0527 03:20:22.572954 3223 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f082e5853d89e45b1603965108cefb9e5611a837b7b44c0c6b6a3dac48ea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:22.573114 kubelet[3223]: E0527 03:20:22.573009 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f082e5853d89e45b1603965108cefb9e5611a837b7b44c0c6b6a3dac48ea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-bwzz8" May 27 03:20:22.573114 kubelet[3223]: E0527 03:20:22.573028 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f082e5853d89e45b1603965108cefb9e5611a837b7b44c0c6b6a3dac48ea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-bwzz8" May 27 03:20:22.574166 kubelet[3223]: E0527 03:20:22.573321 3223 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-bwzz8_calico-system(4bb469da-f9bf-4f89-a896-e1bdeeceda6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-bwzz8_calico-system(4bb469da-f9bf-4f89-a896-e1bdeeceda6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67f082e5853d89e45b1603965108cefb9e5611a837b7b44c0c6b6a3dac48ea50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:20:22.969201 systemd[1]: Created slice kubepods-besteffort-pod77d27265_0497_40a5_a7ec_8f874b3611ea.slice - libcontainer container kubepods-besteffort-pod77d27265_0497_40a5_a7ec_8f874b3611ea.slice. May 27 03:20:22.979596 containerd[1881]: time="2025-05-27T03:20:22.979551350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zhsw,Uid:77d27265-0497-40a5-a7ec-8f874b3611ea,Namespace:calico-system,Attempt:0,}" May 27 03:20:23.062223 containerd[1881]: time="2025-05-27T03:20:23.062159647Z" level=error msg="Failed to destroy network for sandbox \"1d0fad0e1d46f0e1493fc3c734d697c9e72dd76d36ae1521fbc38bd9ec12afb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:23.065035 containerd[1881]: time="2025-05-27T03:20:23.064979275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zhsw,Uid:77d27265-0497-40a5-a7ec-8f874b3611ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0fad0e1d46f0e1493fc3c734d697c9e72dd76d36ae1521fbc38bd9ec12afb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:23.066796 kubelet[3223]: E0527 03:20:23.065283 3223 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0fad0e1d46f0e1493fc3c734d697c9e72dd76d36ae1521fbc38bd9ec12afb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:23.066796 kubelet[3223]: E0527 03:20:23.065350 3223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0fad0e1d46f0e1493fc3c734d697c9e72dd76d36ae1521fbc38bd9ec12afb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5zhsw" May 27 03:20:23.066796 kubelet[3223]: E0527 03:20:23.065377 3223 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0fad0e1d46f0e1493fc3c734d697c9e72dd76d36ae1521fbc38bd9ec12afb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5zhsw" May 27 03:20:23.066066 systemd[1]: run-netns-cni\x2d6be2747f\x2d18f0\x2d1618\x2d7839\x2d169a69fbbe56.mount: Deactivated successfully. 
May 27 03:20:23.067048 kubelet[3223]: E0527 03:20:23.065444 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5zhsw_calico-system(77d27265-0497-40a5-a7ec-8f874b3611ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5zhsw_calico-system(77d27265-0497-40a5-a7ec-8f874b3611ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0fad0e1d46f0e1493fc3c734d697c9e72dd76d36ae1521fbc38bd9ec12afb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5zhsw" podUID="77d27265-0497-40a5-a7ec-8f874b3611ea" May 27 03:20:29.034791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363606641.mount: Deactivated successfully. May 27 03:20:29.115272 containerd[1881]: time="2025-05-27T03:20:29.115217138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:29.129766 containerd[1881]: time="2025-05-27T03:20:29.129716432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:20:29.149890 containerd[1881]: time="2025-05-27T03:20:29.149649074Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:29.153419 containerd[1881]: time="2025-05-27T03:20:29.153369286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:29.157917 containerd[1881]: time="2025-05-27T03:20:29.157851974Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.970143671s" May 27 03:20:29.157917 containerd[1881]: time="2025-05-27T03:20:29.157901037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:20:29.183824 containerd[1881]: time="2025-05-27T03:20:29.183781401Z" level=info msg="CreateContainer within sandbox \"9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:20:29.228135 containerd[1881]: time="2025-05-27T03:20:29.227976071Z" level=info msg="Container 40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:29.233780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726819977.mount: Deactivated successfully. 
May 27 03:20:29.290581 containerd[1881]: time="2025-05-27T03:20:29.290440018Z" level=info msg="CreateContainer within sandbox \"9d6df5e20faf5e5c552c658073632b8014f6fbd4a06e5c07478307ed865c8c7f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\"" May 27 03:20:29.291277 containerd[1881]: time="2025-05-27T03:20:29.291240401Z" level=info msg="StartContainer for \"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\"" May 27 03:20:29.295312 containerd[1881]: time="2025-05-27T03:20:29.295242660Z" level=info msg="connecting to shim 40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e" address="unix:///run/containerd/s/14d6ae509b6ede3b514f13eecbccd197dca94e4da6a421c6ad388ed86f861d0c" protocol=ttrpc version=3 May 27 03:20:29.447301 systemd[1]: Started cri-containerd-40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e.scope - libcontainer container 40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e. May 27 03:20:29.519883 containerd[1881]: time="2025-05-27T03:20:29.519843158Z" level=info msg="StartContainer for \"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\" returns successfully" May 27 03:20:29.700539 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:20:29.701276 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 03:20:30.102402 kubelet[3223]: I0527 03:20:30.102342 3223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-ca-bundle\") pod \"da64d26a-0dc2-4c54-b72b-53c0531d11be\" (UID: \"da64d26a-0dc2-4c54-b72b-53c0531d11be\") " May 27 03:20:30.103881 kubelet[3223]: I0527 03:20:30.103481 3223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z42lb\" (UniqueName: \"kubernetes.io/projected/da64d26a-0dc2-4c54-b72b-53c0531d11be-kube-api-access-z42lb\") pod \"da64d26a-0dc2-4c54-b72b-53c0531d11be\" (UID: \"da64d26a-0dc2-4c54-b72b-53c0531d11be\") " May 27 03:20:30.103881 kubelet[3223]: I0527 03:20:30.103525 3223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-backend-key-pair\") pod \"da64d26a-0dc2-4c54-b72b-53c0531d11be\" (UID: \"da64d26a-0dc2-4c54-b72b-53c0531d11be\") " May 27 03:20:30.104031 kubelet[3223]: I0527 03:20:30.102881 3223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "da64d26a-0dc2-4c54-b72b-53c0531d11be" (UID: "da64d26a-0dc2-4c54-b72b-53c0531d11be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 03:20:30.126257 kubelet[3223]: I0527 03:20:30.126158 3223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da64d26a-0dc2-4c54-b72b-53c0531d11be-kube-api-access-z42lb" (OuterVolumeSpecName: "kube-api-access-z42lb") pod "da64d26a-0dc2-4c54-b72b-53c0531d11be" (UID: "da64d26a-0dc2-4c54-b72b-53c0531d11be"). InnerVolumeSpecName "kube-api-access-z42lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 03:20:30.127983 systemd[1]: var-lib-kubelet-pods-da64d26a\x2d0dc2\x2d4c54\x2db72b\x2d53c0531d11be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:20:30.128528 kubelet[3223]: I0527 03:20:30.128124 3223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "da64d26a-0dc2-4c54-b72b-53c0531d11be" (UID: "da64d26a-0dc2-4c54-b72b-53c0531d11be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 03:20:30.134190 systemd[1]: var-lib-kubelet-pods-da64d26a\x2d0dc2\x2d4c54\x2db72b\x2d53c0531d11be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz42lb.mount: Deactivated successfully. May 27 03:20:30.204579 kubelet[3223]: I0527 03:20:30.204538 3223 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z42lb\" (UniqueName: \"kubernetes.io/projected/da64d26a-0dc2-4c54-b72b-53c0531d11be-kube-api-access-z42lb\") on node \"ip-172-31-29-86\" DevicePath \"\"" May 27 03:20:30.204579 kubelet[3223]: I0527 03:20:30.204571 3223 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-backend-key-pair\") on node \"ip-172-31-29-86\" DevicePath \"\"" May 27 03:20:30.204579 kubelet[3223]: I0527 03:20:30.204581 3223 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da64d26a-0dc2-4c54-b72b-53c0531d11be-whisker-ca-bundle\") on node \"ip-172-31-29-86\" DevicePath \"\"" May 27 03:20:30.285496 systemd[1]: Removed slice kubepods-besteffort-podda64d26a_0dc2_4c54_b72b_53c0531d11be.slice - libcontainer container 
kubepods-besteffort-podda64d26a_0dc2_4c54_b72b_53c0531d11be.slice. May 27 03:20:30.305501 kubelet[3223]: I0527 03:20:30.303690 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rcpc9" podStartSLOduration=2.27132928 podStartE2EDuration="18.303658361s" podCreationTimestamp="2025-05-27 03:20:12 +0000 UTC" firstStartedPulling="2025-05-27 03:20:13.130191449 +0000 UTC m=+22.350440136" lastFinishedPulling="2025-05-27 03:20:29.162520538 +0000 UTC m=+38.382769217" observedRunningTime="2025-05-27 03:20:30.302011502 +0000 UTC m=+39.522260200" watchObservedRunningTime="2025-05-27 03:20:30.303658361 +0000 UTC m=+39.523907059" May 27 03:20:30.418491 systemd[1]: Created slice kubepods-besteffort-pod594fafad_e868_4ec4_9b96_4b356334c568.slice - libcontainer container kubepods-besteffort-pod594fafad_e868_4ec4_9b96_4b356334c568.slice. May 27 03:20:30.507056 kubelet[3223]: I0527 03:20:30.507003 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/594fafad-e868-4ec4-9b96-4b356334c568-whisker-ca-bundle\") pod \"whisker-59cffdd558-mvblc\" (UID: \"594fafad-e868-4ec4-9b96-4b356334c568\") " pod="calico-system/whisker-59cffdd558-mvblc" May 27 03:20:30.507056 kubelet[3223]: I0527 03:20:30.507050 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9jm\" (UniqueName: \"kubernetes.io/projected/594fafad-e868-4ec4-9b96-4b356334c568-kube-api-access-tf9jm\") pod \"whisker-59cffdd558-mvblc\" (UID: \"594fafad-e868-4ec4-9b96-4b356334c568\") " pod="calico-system/whisker-59cffdd558-mvblc" May 27 03:20:30.507256 kubelet[3223]: I0527 03:20:30.507094 3223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/594fafad-e868-4ec4-9b96-4b356334c568-whisker-backend-key-pair\") pod 
\"whisker-59cffdd558-mvblc\" (UID: \"594fafad-e868-4ec4-9b96-4b356334c568\") " pod="calico-system/whisker-59cffdd558-mvblc" May 27 03:20:30.722891 containerd[1881]: time="2025-05-27T03:20:30.722765892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cffdd558-mvblc,Uid:594fafad-e868-4ec4-9b96-4b356334c568,Namespace:calico-system,Attempt:0,}" May 27 03:20:30.953263 kubelet[3223]: I0527 03:20:30.953223 3223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da64d26a-0dc2-4c54-b72b-53c0531d11be" path="/var/lib/kubelet/pods/da64d26a-0dc2-4c54-b72b-53c0531d11be/volumes" May 27 03:20:31.363391 (udev-worker)[4544]: Network interface NamePolicy= disabled on kernel command line. May 27 03:20:31.367007 systemd-networkd[1721]: cali6899d06d3f3: Link UP May 27 03:20:31.367265 systemd-networkd[1721]: cali6899d06d3f3: Gained carrier May 27 03:20:31.404428 containerd[1881]: 2025-05-27 03:20:30.770 [INFO][4571] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:20:31.404428 containerd[1881]: 2025-05-27 03:20:30.827 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0 whisker-59cffdd558- calico-system 594fafad-e868-4ec4-9b96-4b356334c568 865 0 2025-05-27 03:20:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59cffdd558 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-86 whisker-59cffdd558-mvblc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6899d06d3f3 [] [] }} ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-" May 27 03:20:31.404428 containerd[1881]: 2025-05-27 03:20:30.828 [INFO][4571] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" May 27 03:20:31.404428 containerd[1881]: 2025-05-27 03:20:31.231 [INFO][4582] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" HandleID="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Workload="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.233 [INFO][4582] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" HandleID="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Workload="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038d040), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-86", "pod":"whisker-59cffdd558-mvblc", "timestamp":"2025-05-27 03:20:31.23135211 +0000 UTC"}, Hostname:"ip-172-31-29-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.233 [INFO][4582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.233 [INFO][4582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.233 [INFO][4582] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.256 [INFO][4582] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" host="ip-172-31-29-86" May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.274 [INFO][4582] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.281 [INFO][4582] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.284 [INFO][4582] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.287 [INFO][4582] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:31.404779 containerd[1881]: 2025-05-27 03:20:31.288 [INFO][4582] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" host="ip-172-31-29-86" May 27 03:20:31.405221 containerd[1881]: 2025-05-27 03:20:31.297 [INFO][4582] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5 May 27 03:20:31.405221 containerd[1881]: 2025-05-27 03:20:31.313 [INFO][4582] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" host="ip-172-31-29-86" May 27 03:20:31.405221 containerd[1881]: 2025-05-27 03:20:31.322 [INFO][4582] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.1/26] block=192.168.31.0/26 
handle="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" host="ip-172-31-29-86" May 27 03:20:31.405221 containerd[1881]: 2025-05-27 03:20:31.322 [INFO][4582] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.1/26] handle="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" host="ip-172-31-29-86" May 27 03:20:31.405221 containerd[1881]: 2025-05-27 03:20:31.322 [INFO][4582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:20:31.405221 containerd[1881]: 2025-05-27 03:20:31.322 [INFO][4582] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.1/26] IPv6=[] ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" HandleID="k8s-pod-network.ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Workload="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" May 27 03:20:31.405432 containerd[1881]: 2025-05-27 03:20:31.332 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0", GenerateName:"whisker-59cffdd558-", Namespace:"calico-system", SelfLink:"", UID:"594fafad-e868-4ec4-9b96-4b356334c568", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cffdd558", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"whisker-59cffdd558-mvblc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6899d06d3f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:31.405432 containerd[1881]: 2025-05-27 03:20:31.332 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.1/32] ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" May 27 03:20:31.405576 containerd[1881]: 2025-05-27 03:20:31.332 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6899d06d3f3 ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" May 27 03:20:31.405576 containerd[1881]: 2025-05-27 03:20:31.361 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" May 27 03:20:31.405664 containerd[1881]: 2025-05-27 03:20:31.361 [INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" 
Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0", GenerateName:"whisker-59cffdd558-", Namespace:"calico-system", SelfLink:"", UID:"594fafad-e868-4ec4-9b96-4b356334c568", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cffdd558", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5", Pod:"whisker-59cffdd558-mvblc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6899d06d3f3", MAC:"f6:bc:97:a3:12:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:31.405757 containerd[1881]: 2025-05-27 03:20:31.393 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" Namespace="calico-system" Pod="whisker-59cffdd558-mvblc" WorkloadEndpoint="ip--172--31--29--86-k8s-whisker--59cffdd558--mvblc-eth0" May 27 03:20:31.829730 containerd[1881]: 
time="2025-05-27T03:20:31.829644482Z" level=info msg="connecting to shim ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5" address="unix:///run/containerd/s/853b6f79c57fcca0ed99a2650153ee53d1922371e747ca92cc7157a4f81832e8" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:31.892546 systemd[1]: Started cri-containerd-ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5.scope - libcontainer container ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5. May 27 03:20:32.051232 containerd[1881]: time="2025-05-27T03:20:32.051187725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cffdd558-mvblc,Uid:594fafad-e868-4ec4-9b96-4b356334c568,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecab4e25f942e82aa8921009c24ebd814d143cb63beac9d8a1e04ba97e10c3f5\"" May 27 03:20:32.059730 containerd[1881]: time="2025-05-27T03:20:32.059417590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:20:32.137211 containerd[1881]: time="2025-05-27T03:20:32.137059996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\" id:\"226ca28f0b4ea6b3ca4c1e37256bb4cce16dac934ae3c28e9d022ea01bd301af\" pid:4700 exit_status:1 exited_at:{seconds:1748316032 nanos:133198947}" May 27 03:20:32.256224 containerd[1881]: time="2025-05-27T03:20:32.256101528Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:20:32.258479 containerd[1881]: time="2025-05-27T03:20:32.258438360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:20:32.261389 containerd[1881]: time="2025-05-27T03:20:32.258636383Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:20:32.261970 kubelet[3223]: E0527 03:20:32.261832 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:20:32.261970 kubelet[3223]: E0527 03:20:32.261929 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:20:32.269014 kubelet[3223]: E0527 03:20:32.268929 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:434808263c7f46d09d8c48091d884722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:20:32.272401 containerd[1881]: 
time="2025-05-27T03:20:32.272235610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:20:32.451142 containerd[1881]: time="2025-05-27T03:20:32.451011320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\" id:\"11340bfb2cee15868986df952000e26304f8f9085d44f7453c773f9084237679\" pid:4804 exit_status:1 exited_at:{seconds:1748316032 nanos:450685848}" May 27 03:20:32.469039 containerd[1881]: time="2025-05-27T03:20:32.468975781Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:20:32.471467 containerd[1881]: time="2025-05-27T03:20:32.471376558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:20:32.471467 containerd[1881]: time="2025-05-27T03:20:32.471415876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:20:32.471972 kubelet[3223]: E0527 03:20:32.471934 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:20:32.471972 kubelet[3223]: E0527 03:20:32.472019 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:20:32.473834 kubelet[3223]: E0527 03:20:32.473159 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:20:32.474383 kubelet[3223]: E0527 03:20:32.474331 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:20:32.502282 systemd-networkd[1721]: vxlan.calico: Link UP May 27 03:20:32.502292 systemd-networkd[1721]: vxlan.calico: Gained carrier May 27 03:20:32.537568 (udev-worker)[4539]: Network interface NamePolicy= disabled on kernel command line. May 27 03:20:32.950857 containerd[1881]: time="2025-05-27T03:20:32.950778936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4z6jd,Uid:7de6b694-89fa-4461-9951-cd87b692bf22,Namespace:kube-system,Attempt:0,}" May 27 03:20:33.084231 systemd-networkd[1721]: caliced496df0e0: Link UP May 27 03:20:33.086114 systemd-networkd[1721]: caliced496df0e0: Gained carrier May 27 03:20:33.109123 containerd[1881]: 2025-05-27 03:20:33.008 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0 coredns-7c65d6cfc9- kube-system 7de6b694-89fa-4461-9951-cd87b692bf22 792 0 2025-05-27 03:19:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-86 coredns-7c65d6cfc9-4z6jd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliced496df0e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-" May 27 03:20:33.109123 containerd[1881]: 2025-05-27 03:20:33.008 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" May 27 03:20:33.109123 containerd[1881]: 2025-05-27 03:20:33.036 [INFO][4891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" HandleID="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Workload="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.036 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" HandleID="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Workload="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235080), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-86", "pod":"coredns-7c65d6cfc9-4z6jd", "timestamp":"2025-05-27 03:20:33.036264076 +0000 UTC"}, Hostname:"ip-172-31-29-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.036 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.036 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.036 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.044 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" host="ip-172-31-29-86" May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.049 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.054 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.056 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.059 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:33.109670 containerd[1881]: 2025-05-27 03:20:33.059 [INFO][4891] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" host="ip-172-31-29-86" May 27 03:20:33.110334 containerd[1881]: 2025-05-27 03:20:33.061 [INFO][4891] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e May 27 03:20:33.110334 containerd[1881]: 2025-05-27 03:20:33.066 [INFO][4891] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" host="ip-172-31-29-86" May 27 03:20:33.110334 containerd[1881]: 2025-05-27 03:20:33.075 [INFO][4891] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.2/26] block=192.168.31.0/26 
handle="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" host="ip-172-31-29-86" May 27 03:20:33.110334 containerd[1881]: 2025-05-27 03:20:33.075 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.2/26] handle="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" host="ip-172-31-29-86" May 27 03:20:33.110334 containerd[1881]: 2025-05-27 03:20:33.075 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:20:33.110334 containerd[1881]: 2025-05-27 03:20:33.075 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.2/26] IPv6=[] ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" HandleID="k8s-pod-network.5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Workload="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" May 27 03:20:33.110905 containerd[1881]: 2025-05-27 03:20:33.080 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7de6b694-89fa-4461-9951-cd87b692bf22", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 19, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"coredns-7c65d6cfc9-4z6jd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliced496df0e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:33.110905 containerd[1881]: 2025-05-27 03:20:33.080 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.2/32] ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" May 27 03:20:33.110905 containerd[1881]: 2025-05-27 03:20:33.080 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliced496df0e0 ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" May 27 03:20:33.110905 containerd[1881]: 2025-05-27 03:20:33.086 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" May 27 03:20:33.110905 containerd[1881]: 2025-05-27 03:20:33.087 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7de6b694-89fa-4461-9951-cd87b692bf22", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 19, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e", Pod:"coredns-7c65d6cfc9-4z6jd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliced496df0e0", MAC:"b6:b2:18:ac:39:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:33.110905 containerd[1881]: 2025-05-27 03:20:33.103 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4z6jd" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--4z6jd-eth0" May 27 03:20:33.153978 containerd[1881]: time="2025-05-27T03:20:33.153935625Z" level=info msg="connecting to shim 5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e" address="unix:///run/containerd/s/ea9884353b76e98729b186acf836b29d21faf2f35eb5bb3560bc20fefcb38591" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:33.192333 systemd[1]: Started cri-containerd-5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e.scope - libcontainer container 5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e. 
May 27 03:20:33.249675 containerd[1881]: time="2025-05-27T03:20:33.249622248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4z6jd,Uid:7de6b694-89fa-4461-9951-cd87b692bf22,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e\"" May 27 03:20:33.255874 containerd[1881]: time="2025-05-27T03:20:33.255286332Z" level=info msg="CreateContainer within sandbox \"5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:20:33.269890 systemd-networkd[1721]: cali6899d06d3f3: Gained IPv6LL May 27 03:20:33.294762 containerd[1881]: time="2025-05-27T03:20:33.293253399Z" level=info msg="Container abc6160c5048965ad16a9a95a2877d370fb6804204de258b4a5e341a5c050f0c: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:33.293471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041628793.mount: Deactivated successfully. May 27 03:20:33.306565 kubelet[3223]: E0527 03:20:33.306435 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:20:33.308333 containerd[1881]: time="2025-05-27T03:20:33.308206612Z" level=info msg="CreateContainer within sandbox \"5ff9fa86d1555964dce1aa2819b94046510101eb9287b680c21f0885054d895e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abc6160c5048965ad16a9a95a2877d370fb6804204de258b4a5e341a5c050f0c\"" May 27 03:20:33.309176 containerd[1881]: time="2025-05-27T03:20:33.309110498Z" level=info msg="StartContainer for 
\"abc6160c5048965ad16a9a95a2877d370fb6804204de258b4a5e341a5c050f0c\"" May 27 03:20:33.310895 containerd[1881]: time="2025-05-27T03:20:33.310834365Z" level=info msg="connecting to shim abc6160c5048965ad16a9a95a2877d370fb6804204de258b4a5e341a5c050f0c" address="unix:///run/containerd/s/ea9884353b76e98729b186acf836b29d21faf2f35eb5bb3560bc20fefcb38591" protocol=ttrpc version=3 May 27 03:20:33.350597 systemd[1]: Started cri-containerd-abc6160c5048965ad16a9a95a2877d370fb6804204de258b4a5e341a5c050f0c.scope - libcontainer container abc6160c5048965ad16a9a95a2877d370fb6804204de258b4a5e341a5c050f0c. May 27 03:20:33.400345 containerd[1881]: time="2025-05-27T03:20:33.400303398Z" level=info msg="StartContainer for \"abc6160c5048965ad16a9a95a2877d370fb6804204de258b4a5e341a5c050f0c\" returns successfully" May 27 03:20:33.908420 systemd-networkd[1721]: vxlan.calico: Gained IPv6LL May 27 03:20:33.951395 containerd[1881]: time="2025-05-27T03:20:33.951354151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-lmmvx,Uid:ee378e2a-6923-47d9-907b-991e204e4fb5,Namespace:calico-apiserver,Attempt:0,}" May 27 03:20:33.951853 containerd[1881]: time="2025-05-27T03:20:33.951828966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-krkwc,Uid:63cd9f6c-74e1-4a65-8a52-ed51845517e4,Namespace:calico-apiserver,Attempt:0,}" May 27 03:20:33.969193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123718525.mount: Deactivated successfully. 
May 27 03:20:34.154942 systemd-networkd[1721]: calia25cac00a15: Link UP May 27 03:20:34.157505 systemd-networkd[1721]: calia25cac00a15: Gained carrier May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.035 [INFO][4995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0 calico-apiserver-5b95dd4f5c- calico-apiserver 63cd9f6c-74e1-4a65-8a52-ed51845517e4 801 0 2025-05-27 03:20:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b95dd4f5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-86 calico-apiserver-5b95dd4f5c-krkwc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia25cac00a15 [] [] }} ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.036 [INFO][4995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.071 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" HandleID="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Workload="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 
03:20:34.073 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" HandleID="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Workload="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-86", "pod":"calico-apiserver-5b95dd4f5c-krkwc", "timestamp":"2025-05-27 03:20:34.071035484 +0000 UTC"}, Hostname:"ip-172-31-29-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.073 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.074 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.074 [INFO][5014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.084 [INFO][5014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.094 [INFO][5014] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.099 [INFO][5014] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.107 [INFO][5014] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.116 [INFO][5014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.116 [INFO][5014] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.121 [INFO][5014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791 May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.130 [INFO][5014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.144 [INFO][5014] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.3/26] block=192.168.31.0/26 
handle="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.144 [INFO][5014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.3/26] handle="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" host="ip-172-31-29-86" May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.144 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:20:34.203653 containerd[1881]: 2025-05-27 03:20:34.144 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.3/26] IPv6=[] ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" HandleID="k8s-pod-network.58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Workload="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" May 27 03:20:34.206919 containerd[1881]: 2025-05-27 03:20:34.147 [INFO][4995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0", GenerateName:"calico-apiserver-5b95dd4f5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63cd9f6c-74e1-4a65-8a52-ed51845517e4", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b95dd4f5c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"calico-apiserver-5b95dd4f5c-krkwc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia25cac00a15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:34.206919 containerd[1881]: 2025-05-27 03:20:34.147 [INFO][4995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.3/32] ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" May 27 03:20:34.206919 containerd[1881]: 2025-05-27 03:20:34.147 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia25cac00a15 ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" May 27 03:20:34.206919 containerd[1881]: 2025-05-27 03:20:34.160 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" May 27 03:20:34.206919 containerd[1881]: 2025-05-27 03:20:34.161 
[INFO][4995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0", GenerateName:"calico-apiserver-5b95dd4f5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63cd9f6c-74e1-4a65-8a52-ed51845517e4", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b95dd4f5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791", Pod:"calico-apiserver-5b95dd4f5c-krkwc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia25cac00a15", MAC:"c2:5e:65:bd:0c:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:34.206919 containerd[1881]: 2025-05-27 03:20:34.198 [INFO][4995] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-krkwc" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--krkwc-eth0" May 27 03:20:34.268106 containerd[1881]: time="2025-05-27T03:20:34.267451570Z" level=info msg="connecting to shim 58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791" address="unix:///run/containerd/s/835a16c11f0c540a8fcb3e1c08974b7886ed5469d1844b936204df1ccf6c134f" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:34.337272 systemd[1]: Started cri-containerd-58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791.scope - libcontainer container 58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791. May 27 03:20:34.349657 systemd-networkd[1721]: califcc5ba9eb10: Link UP May 27 03:20:34.351199 systemd-networkd[1721]: califcc5ba9eb10: Gained carrier May 27 03:20:34.377179 kubelet[3223]: I0527 03:20:34.376603 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4z6jd" podStartSLOduration=37.37657488 podStartE2EDuration="37.37657488s" podCreationTimestamp="2025-05-27 03:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:20:34.355986546 +0000 UTC m=+43.576235245" watchObservedRunningTime="2025-05-27 03:20:34.37657488 +0000 UTC m=+43.596823579" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.047 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0 calico-apiserver-5b95dd4f5c- calico-apiserver ee378e2a-6923-47d9-907b-991e204e4fb5 796 0 2025-05-27 03:20:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:5b95dd4f5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-86 calico-apiserver-5b95dd4f5c-lmmvx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcc5ba9eb10 [] [] }} ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.047 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.089 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" HandleID="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Workload="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.089 [INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" HandleID="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Workload="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-86", "pod":"calico-apiserver-5b95dd4f5c-lmmvx", "timestamp":"2025-05-27 03:20:34.08912934 +0000 UTC"}, Hostname:"ip-172-31-29-86", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.089 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.144 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.146 [INFO][5019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.206 [INFO][5019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.228 [INFO][5019] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.238 [INFO][5019] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.249 [INFO][5019] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.257 [INFO][5019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.257 [INFO][5019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.263 [INFO][5019] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322 May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.304 [INFO][5019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.323 [INFO][5019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.4/26] block=192.168.31.0/26 handle="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.323 [INFO][5019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.4/26] handle="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" host="ip-172-31-29-86" May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.323 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:20:34.383954 containerd[1881]: 2025-05-27 03:20:34.323 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.4/26] IPv6=[] ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" HandleID="k8s-pod-network.a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Workload="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" May 27 03:20:34.385490 containerd[1881]: 2025-05-27 03:20:34.333 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0", GenerateName:"calico-apiserver-5b95dd4f5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee378e2a-6923-47d9-907b-991e204e4fb5", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b95dd4f5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"calico-apiserver-5b95dd4f5c-lmmvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcc5ba9eb10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:34.385490 containerd[1881]: 2025-05-27 03:20:34.333 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.4/32] ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" May 27 03:20:34.385490 containerd[1881]: 2025-05-27 03:20:34.333 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcc5ba9eb10 ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" May 27 03:20:34.385490 containerd[1881]: 2025-05-27 03:20:34.350 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" May 27 03:20:34.385490 containerd[1881]: 2025-05-27 03:20:34.351 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0", GenerateName:"calico-apiserver-5b95dd4f5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee378e2a-6923-47d9-907b-991e204e4fb5", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b95dd4f5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322", Pod:"calico-apiserver-5b95dd4f5c-lmmvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcc5ba9eb10", MAC:"06:70:3d:fb:b9:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:34.385490 containerd[1881]: 2025-05-27 03:20:34.377 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" Namespace="calico-apiserver" Pod="calico-apiserver-5b95dd4f5c-lmmvx" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--apiserver--5b95dd4f5c--lmmvx-eth0" May 27 03:20:34.462319 containerd[1881]: time="2025-05-27T03:20:34.462035929Z" level=info msg="connecting to shim 
a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322" address="unix:///run/containerd/s/4631ad3e3305819be0a0cbea5e2ef7532b97c273cc5aa838d46b8703089fa268" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:34.519655 systemd[1]: Started cri-containerd-a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322.scope - libcontainer container a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322. May 27 03:20:34.589009 containerd[1881]: time="2025-05-27T03:20:34.588946406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-krkwc,Uid:63cd9f6c-74e1-4a65-8a52-ed51845517e4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791\"" May 27 03:20:34.593500 containerd[1881]: time="2025-05-27T03:20:34.593464566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:20:34.612290 systemd-networkd[1721]: caliced496df0e0: Gained IPv6LL May 27 03:20:34.640814 containerd[1881]: time="2025-05-27T03:20:34.640704227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b95dd4f5c-lmmvx,Uid:ee378e2a-6923-47d9-907b-991e204e4fb5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322\"" May 27 03:20:34.950997 containerd[1881]: time="2025-05-27T03:20:34.950955439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-bwzz8,Uid:4bb469da-f9bf-4f89-a896-e1bdeeceda6d,Namespace:calico-system,Attempt:0,}" May 27 03:20:35.081685 systemd-networkd[1721]: calibe7f3112166: Link UP May 27 03:20:35.084018 systemd-networkd[1721]: calibe7f3112166: Gained carrier May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.001 [INFO][5143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0 goldmane-8f77d7b6c- 
calico-system 4bb469da-f9bf-4f89-a896-e1bdeeceda6d 797 0 2025-05-27 03:20:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-86 goldmane-8f77d7b6c-bwzz8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibe7f3112166 [] [] }} ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.001 [INFO][5143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.034 [INFO][5155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" HandleID="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Workload="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.034 [INFO][5155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" HandleID="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Workload="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-86", "pod":"goldmane-8f77d7b6c-bwzz8", "timestamp":"2025-05-27 03:20:35.034324148 +0000 UTC"}, 
Hostname:"ip-172-31-29-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.034 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.034 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.034 [INFO][5155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.042 [INFO][5155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.047 [INFO][5155] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.052 [INFO][5155] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.055 [INFO][5155] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.058 [INFO][5155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.058 [INFO][5155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.060 [INFO][5155] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793 May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.065 [INFO][5155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.076 [INFO][5155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.5/26] block=192.168.31.0/26 handle="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.076 [INFO][5155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.5/26] handle="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" host="ip-172-31-29-86" May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.076 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:20:35.100468 containerd[1881]: 2025-05-27 03:20:35.076 [INFO][5155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.5/26] IPv6=[] ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" HandleID="k8s-pod-network.4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Workload="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" May 27 03:20:35.102365 containerd[1881]: 2025-05-27 03:20:35.079 [INFO][5143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"4bb469da-f9bf-4f89-a896-e1bdeeceda6d", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"goldmane-8f77d7b6c-bwzz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calibe7f3112166", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:35.102365 containerd[1881]: 2025-05-27 03:20:35.079 [INFO][5143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.5/32] ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" May 27 03:20:35.102365 containerd[1881]: 2025-05-27 03:20:35.079 [INFO][5143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe7f3112166 ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" May 27 03:20:35.102365 containerd[1881]: 2025-05-27 03:20:35.082 [INFO][5143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" May 27 03:20:35.102365 containerd[1881]: 2025-05-27 03:20:35.083 [INFO][5143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"4bb469da-f9bf-4f89-a896-e1bdeeceda6d", ResourceVersion:"797", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793", Pod:"goldmane-8f77d7b6c-bwzz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe7f3112166", MAC:"ee:b8:97:66:ab:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:35.102365 containerd[1881]: 2025-05-27 03:20:35.095 [INFO][5143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bwzz8" WorkloadEndpoint="ip--172--31--29--86-k8s-goldmane--8f77d7b6c--bwzz8-eth0" May 27 03:20:35.147361 containerd[1881]: time="2025-05-27T03:20:35.147319007Z" level=info msg="connecting to shim 4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793" address="unix:///run/containerd/s/6dc61d1fafcfa520576b57fa75f7f401cd406acddfe3abbee0f47bb10e627dbb" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:35.179342 systemd[1]: Started cri-containerd-4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793.scope - libcontainer container 
4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793. May 27 03:20:35.239921 containerd[1881]: time="2025-05-27T03:20:35.239658499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-bwzz8,Uid:4bb469da-f9bf-4f89-a896-e1bdeeceda6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b61f168ea426a41cd97501bbe4b6bf41c314f35d0ec60bba3a9026150c79793\"" May 27 03:20:35.636226 systemd-networkd[1721]: calia25cac00a15: Gained IPv6LL May 27 03:20:35.892315 systemd-networkd[1721]: califcc5ba9eb10: Gained IPv6LL May 27 03:20:35.951175 containerd[1881]: time="2025-05-27T03:20:35.951123695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfbc5bcd-5fzgz,Uid:bf9ca194-f3ff-4b3d-a182-dced4faeca72,Namespace:calico-system,Attempt:0,}" May 27 03:20:36.284891 systemd-networkd[1721]: calib326bf443a2: Link UP May 27 03:20:36.287519 systemd-networkd[1721]: calib326bf443a2: Gained carrier May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.084 [INFO][5216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0 calico-kube-controllers-5bfbc5bcd- calico-system bf9ca194-f3ff-4b3d-a182-dced4faeca72 800 0 2025-05-27 03:20:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bfbc5bcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-86 calico-kube-controllers-5bfbc5bcd-5fzgz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib326bf443a2 [] [] }} ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" 
WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.084 [INFO][5216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.184 [INFO][5233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" HandleID="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Workload="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.184 [INFO][5233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" HandleID="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Workload="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235000), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-86", "pod":"calico-kube-controllers-5bfbc5bcd-5fzgz", "timestamp":"2025-05-27 03:20:36.184205416 +0000 UTC"}, Hostname:"ip-172-31-29-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.184 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.184 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.184 [INFO][5233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.197 [INFO][5233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.208 [INFO][5233] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.222 [INFO][5233] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.226 [INFO][5233] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.230 [INFO][5233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.231 [INFO][5233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.234 [INFO][5233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9 May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.245 [INFO][5233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 
03:20:36.267 [INFO][5233] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.6/26] block=192.168.31.0/26 handle="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.268 [INFO][5233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.6/26] handle="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" host="ip-172-31-29-86" May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.268 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:20:36.329944 containerd[1881]: 2025-05-27 03:20:36.268 [INFO][5233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.6/26] IPv6=[] ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" HandleID="k8s-pod-network.df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Workload="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" May 27 03:20:36.331914 containerd[1881]: 2025-05-27 03:20:36.274 [INFO][5216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0", GenerateName:"calico-kube-controllers-5bfbc5bcd-", Namespace:"calico-system", SelfLink:"", UID:"bf9ca194-f3ff-4b3d-a182-dced4faeca72", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfbc5bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"calico-kube-controllers-5bfbc5bcd-5fzgz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib326bf443a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:36.331914 containerd[1881]: 2025-05-27 03:20:36.274 [INFO][5216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.6/32] ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" May 27 03:20:36.331914 containerd[1881]: 2025-05-27 03:20:36.274 [INFO][5216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib326bf443a2 ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" May 27 03:20:36.331914 containerd[1881]: 2025-05-27 03:20:36.292 [INFO][5216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" May 27 03:20:36.331914 containerd[1881]: 2025-05-27 03:20:36.292 [INFO][5216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0", GenerateName:"calico-kube-controllers-5bfbc5bcd-", Namespace:"calico-system", SelfLink:"", UID:"bf9ca194-f3ff-4b3d-a182-dced4faeca72", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfbc5bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9", Pod:"calico-kube-controllers-5bfbc5bcd-5fzgz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib326bf443a2", MAC:"9a:f2:6a:29:0b:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:36.331914 containerd[1881]: 2025-05-27 03:20:36.317 [INFO][5216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" Namespace="calico-system" Pod="calico-kube-controllers-5bfbc5bcd-5fzgz" WorkloadEndpoint="ip--172--31--29--86-k8s-calico--kube--controllers--5bfbc5bcd--5fzgz-eth0" May 27 03:20:36.340251 systemd-networkd[1721]: calibe7f3112166: Gained IPv6LL May 27 03:20:36.402619 containerd[1881]: time="2025-05-27T03:20:36.402404674Z" level=info msg="connecting to shim df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9" address="unix:///run/containerd/s/4ed109a4bbf3a4c6004d7b7e0f1b8a2379bba7142983e4f5d337aa754379ad4c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:36.458522 systemd[1]: Started cri-containerd-df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9.scope - libcontainer container df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9. 
May 27 03:20:36.584688 containerd[1881]: time="2025-05-27T03:20:36.584574504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfbc5bcd-5fzgz,Uid:bf9ca194-f3ff-4b3d-a182-dced4faeca72,Namespace:calico-system,Attempt:0,} returns sandbox id \"df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9\"" May 27 03:20:36.952563 containerd[1881]: time="2025-05-27T03:20:36.952028965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zhsw,Uid:77d27265-0497-40a5-a7ec-8f874b3611ea,Namespace:calico-system,Attempt:0,}" May 27 03:20:37.233419 systemd-networkd[1721]: cali50808cdcecd: Link UP May 27 03:20:37.237260 systemd-networkd[1721]: cali50808cdcecd: Gained carrier May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.069 [INFO][5297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0 csi-node-driver- calico-system 77d27265-0497-40a5-a7ec-8f874b3611ea 686 0 2025-05-27 03:20:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-86 csi-node-driver-5zhsw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali50808cdcecd [] [] }} ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Namespace="calico-system" Pod="csi-node-driver-5zhsw" WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.069 [INFO][5297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Namespace="calico-system" Pod="csi-node-driver-5zhsw" 
WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.156 [INFO][5310] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" HandleID="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Workload="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.156 [INFO][5310] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" HandleID="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Workload="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9300), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-86", "pod":"csi-node-driver-5zhsw", "timestamp":"2025-05-27 03:20:37.155980591 +0000 UTC"}, Hostname:"ip-172-31-29-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.156 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.156 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.157 [INFO][5310] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.168 [INFO][5310] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.176 [INFO][5310] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.185 [INFO][5310] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.189 [INFO][5310] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.195 [INFO][5310] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.195 [INFO][5310] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.200 [INFO][5310] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.207 [INFO][5310] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.221 [INFO][5310] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.7/26] block=192.168.31.0/26 
handle="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.221 [INFO][5310] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.7/26] handle="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" host="ip-172-31-29-86" May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.221 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:20:37.271236 containerd[1881]: 2025-05-27 03:20:37.221 [INFO][5310] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.7/26] IPv6=[] ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" HandleID="k8s-pod-network.ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Workload="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" May 27 03:20:37.272643 containerd[1881]: 2025-05-27 03:20:37.225 [INFO][5297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Namespace="calico-system" Pod="csi-node-driver-5zhsw" WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77d27265-0497-40a5-a7ec-8f874b3611ea", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"csi-node-driver-5zhsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50808cdcecd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:37.272643 containerd[1881]: 2025-05-27 03:20:37.225 [INFO][5297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.7/32] ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Namespace="calico-system" Pod="csi-node-driver-5zhsw" WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" May 27 03:20:37.272643 containerd[1881]: 2025-05-27 03:20:37.225 [INFO][5297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50808cdcecd ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Namespace="calico-system" Pod="csi-node-driver-5zhsw" WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" May 27 03:20:37.272643 containerd[1881]: 2025-05-27 03:20:37.238 [INFO][5297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Namespace="calico-system" Pod="csi-node-driver-5zhsw" WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" May 27 03:20:37.272643 containerd[1881]: 2025-05-27 03:20:37.239 [INFO][5297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" Namespace="calico-system" Pod="csi-node-driver-5zhsw" WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77d27265-0497-40a5-a7ec-8f874b3611ea", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf", Pod:"csi-node-driver-5zhsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50808cdcecd", MAC:"f2:99:07:66:47:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:37.272643 containerd[1881]: 2025-05-27 03:20:37.262 [INFO][5297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" 
Namespace="calico-system" Pod="csi-node-driver-5zhsw" WorkloadEndpoint="ip--172--31--29--86-k8s-csi--node--driver--5zhsw-eth0" May 27 03:20:37.345093 containerd[1881]: time="2025-05-27T03:20:37.344514041Z" level=info msg="connecting to shim ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf" address="unix:///run/containerd/s/554263b623bc6c0d9cccd94042d3b7656807e5864db3a924bc555a9c4c5e86cb" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:37.392526 systemd[1]: Started cri-containerd-ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf.scope - libcontainer container ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf. May 27 03:20:37.457872 containerd[1881]: time="2025-05-27T03:20:37.457833135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zhsw,Uid:77d27265-0497-40a5-a7ec-8f874b3611ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf\"" May 27 03:20:37.620260 systemd-networkd[1721]: calib326bf443a2: Gained IPv6LL May 27 03:20:37.752857 containerd[1881]: time="2025-05-27T03:20:37.752796965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:37.756256 containerd[1881]: time="2025-05-27T03:20:37.756023958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:20:37.762518 containerd[1881]: time="2025-05-27T03:20:37.762464616Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:37.769374 containerd[1881]: time="2025-05-27T03:20:37.769027703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:37.769798 containerd[1881]: time="2025-05-27T03:20:37.769767679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.17599519s" May 27 03:20:37.769864 containerd[1881]: time="2025-05-27T03:20:37.769803951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:20:37.771376 containerd[1881]: time="2025-05-27T03:20:37.771343862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:20:37.778568 containerd[1881]: time="2025-05-27T03:20:37.778529780Z" level=info msg="CreateContainer within sandbox \"58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:20:37.789106 containerd[1881]: time="2025-05-27T03:20:37.788817539Z" level=info msg="Container ca38870ce6b93918b08facb5dbdb403f0d73c71c14f1c194054c5bc69b105ae5: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:37.800580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695013726.mount: Deactivated successfully. 
May 27 03:20:37.805250 containerd[1881]: time="2025-05-27T03:20:37.805209958Z" level=info msg="CreateContainer within sandbox \"58ccb95f640d80493e6f0f5adcbc7fce60d25ab4dc17d95ed3159cea5903e791\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca38870ce6b93918b08facb5dbdb403f0d73c71c14f1c194054c5bc69b105ae5\"" May 27 03:20:37.811244 containerd[1881]: time="2025-05-27T03:20:37.811205304Z" level=info msg="StartContainer for \"ca38870ce6b93918b08facb5dbdb403f0d73c71c14f1c194054c5bc69b105ae5\"" May 27 03:20:37.813617 containerd[1881]: time="2025-05-27T03:20:37.813576487Z" level=info msg="connecting to shim ca38870ce6b93918b08facb5dbdb403f0d73c71c14f1c194054c5bc69b105ae5" address="unix:///run/containerd/s/835a16c11f0c540a8fcb3e1c08974b7886ed5469d1844b936204df1ccf6c134f" protocol=ttrpc version=3 May 27 03:20:37.853386 systemd[1]: Started cri-containerd-ca38870ce6b93918b08facb5dbdb403f0d73c71c14f1c194054c5bc69b105ae5.scope - libcontainer container ca38870ce6b93918b08facb5dbdb403f0d73c71c14f1c194054c5bc69b105ae5. 
May 27 03:20:37.936987 containerd[1881]: time="2025-05-27T03:20:37.936251141Z" level=info msg="StartContainer for \"ca38870ce6b93918b08facb5dbdb403f0d73c71c14f1c194054c5bc69b105ae5\" returns successfully" May 27 03:20:37.951283 containerd[1881]: time="2025-05-27T03:20:37.951234539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mfctx,Uid:62a9d7fb-b029-4eeb-bbd7-6774fa0d0974,Namespace:kube-system,Attempt:0,}" May 27 03:20:38.090378 containerd[1881]: time="2025-05-27T03:20:38.090194112Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:38.094015 containerd[1881]: time="2025-05-27T03:20:38.093956550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 03:20:38.101494 containerd[1881]: time="2025-05-27T03:20:38.101298060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 329.915831ms" May 27 03:20:38.101994 containerd[1881]: time="2025-05-27T03:20:38.101941661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:20:38.104682 containerd[1881]: time="2025-05-27T03:20:38.104533903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:20:38.122813 containerd[1881]: time="2025-05-27T03:20:38.122767962Z" level=info msg="CreateContainer within sandbox \"a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:20:38.136611 
containerd[1881]: time="2025-05-27T03:20:38.135583065Z" level=info msg="Container 9397f9b322577090f24861f62b7fcc4f209baffb33fad089b87722c9a80d467f: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:38.160803 containerd[1881]: time="2025-05-27T03:20:38.160233108Z" level=info msg="CreateContainer within sandbox \"a5bf4d6244b94496f906f0d066ff5556a08699d9a924b97be466a9b4832a8322\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9397f9b322577090f24861f62b7fcc4f209baffb33fad089b87722c9a80d467f\"" May 27 03:20:38.163875 containerd[1881]: time="2025-05-27T03:20:38.163837701Z" level=info msg="StartContainer for \"9397f9b322577090f24861f62b7fcc4f209baffb33fad089b87722c9a80d467f\"" May 27 03:20:38.164267 systemd-networkd[1721]: calid4038fde16e: Link UP May 27 03:20:38.166200 systemd-networkd[1721]: calid4038fde16e: Gained carrier May 27 03:20:38.177736 containerd[1881]: time="2025-05-27T03:20:38.177673559Z" level=info msg="connecting to shim 9397f9b322577090f24861f62b7fcc4f209baffb33fad089b87722c9a80d467f" address="unix:///run/containerd/s/4631ad3e3305819be0a0cbea5e2ef7532b97c273cc5aa838d46b8703089fa268" protocol=ttrpc version=3 May 27 03:20:38.207360 systemd[1]: Started cri-containerd-9397f9b322577090f24861f62b7fcc4f209baffb33fad089b87722c9a80d467f.scope - libcontainer container 9397f9b322577090f24861f62b7fcc4f209baffb33fad089b87722c9a80d467f. 
May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.020 [INFO][5414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0 coredns-7c65d6cfc9- kube-system 62a9d7fb-b029-4eeb-bbd7-6774fa0d0974 798 0 2025-05-27 03:19:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-86 coredns-7c65d6cfc9-mfctx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4038fde16e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.022 [INFO][5414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.065 [INFO][5430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" HandleID="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Workload="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.065 [INFO][5430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" HandleID="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" 
Workload="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235020), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-86", "pod":"coredns-7c65d6cfc9-mfctx", "timestamp":"2025-05-27 03:20:38.065405401 +0000 UTC"}, Hostname:"ip-172-31-29-86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.065 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.066 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.066 [INFO][5430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-86' May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.078 [INFO][5430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.087 [INFO][5430] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.097 [INFO][5430] ipam/ipam.go 511: Trying affinity for 192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.102 [INFO][5430] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.111 [INFO][5430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.111 [INFO][5430] ipam/ipam.go 1220: Attempting to 
assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.116 [INFO][5430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172 May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.132 [INFO][5430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.147 [INFO][5430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.8/26] block=192.168.31.0/26 handle="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.148 [INFO][5430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.8/26] handle="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" host="ip-172-31-29-86" May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.148 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:20:38.211201 containerd[1881]: 2025-05-27 03:20:38.149 [INFO][5430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.8/26] IPv6=[] ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" HandleID="k8s-pod-network.faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Workload="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" May 27 03:20:38.213913 containerd[1881]: 2025-05-27 03:20:38.153 [INFO][5414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"62a9d7fb-b029-4eeb-bbd7-6774fa0d0974", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 19, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"", Pod:"coredns-7c65d6cfc9-mfctx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4038fde16e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:38.213913 containerd[1881]: 2025-05-27 03:20:38.154 [INFO][5414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.8/32] ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" May 27 03:20:38.213913 containerd[1881]: 2025-05-27 03:20:38.154 [INFO][5414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4038fde16e ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" May 27 03:20:38.213913 containerd[1881]: 2025-05-27 03:20:38.167 [INFO][5414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" May 27 03:20:38.213913 containerd[1881]: 2025-05-27 03:20:38.169 [INFO][5414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"62a9d7fb-b029-4eeb-bbd7-6774fa0d0974", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 19, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-86", ContainerID:"faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172", Pod:"coredns-7c65d6cfc9-mfctx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4038fde16e", MAC:"6a:37:ca:cd:06:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:20:38.213913 containerd[1881]: 2025-05-27 03:20:38.196 [INFO][5414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mfctx" WorkloadEndpoint="ip--172--31--29--86-k8s-coredns--7c65d6cfc9--mfctx-eth0" May 27 03:20:38.293100 containerd[1881]: time="2025-05-27T03:20:38.292285929Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:20:38.296164 containerd[1881]: time="2025-05-27T03:20:38.295446554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:20:38.298252 containerd[1881]: time="2025-05-27T03:20:38.298195841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:20:38.307628 containerd[1881]: time="2025-05-27T03:20:38.307543224Z" level=info msg="connecting to shim faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172" address="unix:///run/containerd/s/25f0f262163af87e253eeea430063dc66b444748aab79af76489cca2ea2f839f" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:38.340304 kubelet[3223]: E0527 03:20:38.340127 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:20:38.346213 kubelet[3223]: E0527 03:20:38.345109 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:20:38.351973 containerd[1881]: time="2025-05-27T03:20:38.350379556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:20:38.369458 kubelet[3223]: E0527 03:20:38.369340 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/c
erts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggznq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-bwzz8_calico-system(4bb469da-f9bf-4f89-a896-e1bdeeceda6d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:20:38.377151 kubelet[3223]: E0527 03:20:38.377093 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:20:38.444966 containerd[1881]: time="2025-05-27T03:20:38.444344001Z" level=info msg="StartContainer for \"9397f9b322577090f24861f62b7fcc4f209baffb33fad089b87722c9a80d467f\" returns successfully" May 27 03:20:38.447734 systemd[1]: Started cri-containerd-faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172.scope - libcontainer container faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172. 
May 27 03:20:38.512042 kubelet[3223]: I0527 03:20:38.511954 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-krkwc" podStartSLOduration=27.332879426 podStartE2EDuration="30.511820406s" podCreationTimestamp="2025-05-27 03:20:08 +0000 UTC" firstStartedPulling="2025-05-27 03:20:34.592263852 +0000 UTC m=+43.812512550" lastFinishedPulling="2025-05-27 03:20:37.771204852 +0000 UTC m=+46.991453530" observedRunningTime="2025-05-27 03:20:38.511208238 +0000 UTC m=+47.731456938" watchObservedRunningTime="2025-05-27 03:20:38.511820406 +0000 UTC m=+47.732069108" May 27 03:20:38.516303 systemd-networkd[1721]: cali50808cdcecd: Gained IPv6LL May 27 03:20:38.569526 containerd[1881]: time="2025-05-27T03:20:38.569482163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mfctx,Uid:62a9d7fb-b029-4eeb-bbd7-6774fa0d0974,Namespace:kube-system,Attempt:0,} returns sandbox id \"faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172\"" May 27 03:20:38.572889 containerd[1881]: time="2025-05-27T03:20:38.572852317Z" level=info msg="CreateContainer within sandbox \"faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:20:38.598994 containerd[1881]: time="2025-05-27T03:20:38.598342316Z" level=info msg="Container cf9b8c1912969bf0a3c652ec5a19a450a10c2454dbe6fde9e938d994d57a3500: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:38.620882 containerd[1881]: time="2025-05-27T03:20:38.620798569Z" level=info msg="CreateContainer within sandbox \"faeff9e2d595cce6a42686b027f7f244803132f66840b7998486d82758c65172\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf9b8c1912969bf0a3c652ec5a19a450a10c2454dbe6fde9e938d994d57a3500\"" May 27 03:20:38.626531 containerd[1881]: time="2025-05-27T03:20:38.622852867Z" level=info msg="StartContainer for 
\"cf9b8c1912969bf0a3c652ec5a19a450a10c2454dbe6fde9e938d994d57a3500\"" May 27 03:20:38.626531 containerd[1881]: time="2025-05-27T03:20:38.623855678Z" level=info msg="connecting to shim cf9b8c1912969bf0a3c652ec5a19a450a10c2454dbe6fde9e938d994d57a3500" address="unix:///run/containerd/s/25f0f262163af87e253eeea430063dc66b444748aab79af76489cca2ea2f839f" protocol=ttrpc version=3 May 27 03:20:38.660706 systemd[1]: Started cri-containerd-cf9b8c1912969bf0a3c652ec5a19a450a10c2454dbe6fde9e938d994d57a3500.scope - libcontainer container cf9b8c1912969bf0a3c652ec5a19a450a10c2454dbe6fde9e938d994d57a3500. May 27 03:20:38.743540 containerd[1881]: time="2025-05-27T03:20:38.743127672Z" level=info msg="StartContainer for \"cf9b8c1912969bf0a3c652ec5a19a450a10c2454dbe6fde9e938d994d57a3500\" returns successfully" May 27 03:20:39.415016 kubelet[3223]: I0527 03:20:39.414452 3223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:20:39.417749 kubelet[3223]: E0527 03:20:39.417714 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:20:39.502763 kubelet[3223]: I0527 03:20:39.502406 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b95dd4f5c-lmmvx" podStartSLOduration=28.040244601 podStartE2EDuration="31.502067412s" podCreationTimestamp="2025-05-27 03:20:08 +0000 UTC" firstStartedPulling="2025-05-27 03:20:34.642334719 +0000 UTC m=+43.862583410" lastFinishedPulling="2025-05-27 03:20:38.104157531 +0000 UTC m=+47.324406221" observedRunningTime="2025-05-27 03:20:39.474299462 +0000 UTC m=+48.694548161" watchObservedRunningTime="2025-05-27 03:20:39.502067412 +0000 UTC m=+48.722316110" May 27 03:20:39.797976 systemd-networkd[1721]: calid4038fde16e: Gained 
IPv6LL May 27 03:20:40.712326 systemd[1]: Started sshd@12-172.31.29.86:22-139.178.68.195:55994.service - OpenSSH per-connection server daemon (139.178.68.195:55994). May 27 03:20:41.004265 sshd[5570]: Accepted publickey for core from 139.178.68.195 port 55994 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:20:41.009790 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:41.026979 systemd-logind[1869]: New session 10 of user core. May 27 03:20:41.032846 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:20:41.423560 kubelet[3223]: I0527 03:20:41.422653 3223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:20:42.191919 ntpd[1862]: Listen normally on 7 vxlan.calico 192.168.31.0:123 May 27 03:20:42.192020 ntpd[1862]: Listen normally on 8 cali6899d06d3f3 [fe80::ecee:eeff:feee:eeee%4]:123 May 27 03:20:42.192100 ntpd[1862]: Listen normally on 9 vxlan.calico [fe80::64f2:3ff:fe6f:6cba%5]:123 May 27 03:20:42.192144 ntpd[1862]: Listen normally on 10 caliced496df0e0 [fe80::ecee:eeff:feee:eeee%8]:123 May 27 03:20:42.192193 ntpd[1862]: Listen normally on 11 calia25cac00a15 [fe80::ecee:eeff:feee:eeee%9]:123 May 27 03:20:42.192228 ntpd[1862]: Listen normally on 12 califcc5ba9eb10 [fe80::ecee:eeff:feee:eeee%10]:123 May 27 03:20:42.192263 ntpd[1862]: Listen normally on 13 calibe7f3112166 [fe80::ecee:eeff:feee:eeee%11]:123 May 27 03:20:42.192295 ntpd[1862]: Listen normally on 14 calib326bf443a2 [fe80::ecee:eeff:feee:eeee%12]:123 May 27 03:20:42.192329 ntpd[1862]: Listen normally on 15 cali50808cdcecd [fe80::ecee:eeff:feee:eeee%13]:123 May 27 03:20:42.192366 ntpd[1862]: Listen normally on 16 calid4038fde16e [fe80::ecee:eeff:feee:eeee%14]:123 May 27 03:20:42.602902 sshd[5576]: Connection closed by 139.178.68.195 port 55994 May 27 03:20:42.605308 sshd-session[5570]: pam_unix(sshd:session): session closed for user core May 27 03:20:42.615994 systemd[1]: sshd@12-172.31.29.86:22-139.178.68.195:55994.service: Deactivated successfully. May 27 03:20:42.628869 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:20:42.639871 systemd-logind[1869]: Session 10 logged out. Waiting for processes to exit. May 27 03:20:42.645307 systemd-logind[1869]: Removed session 10.
May 27 03:20:43.178687 kubelet[3223]: I0527 03:20:43.178615 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mfctx" podStartSLOduration=46.178589053 podStartE2EDuration="46.178589053s" podCreationTimestamp="2025-05-27 03:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:20:39.526128462 +0000 UTC m=+48.746377159" watchObservedRunningTime="2025-05-27 03:20:43.178589053 +0000 UTC m=+52.398837749" May 27 03:20:43.767522 containerd[1881]: time="2025-05-27T03:20:43.767312257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:43.769880 containerd[1881]: time="2025-05-27T03:20:43.769304084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:20:43.770909 containerd[1881]: time="2025-05-27T03:20:43.770436832Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:43.787923 containerd[1881]: time="2025-05-27T03:20:43.787605945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:43.789764 containerd[1881]: time="2025-05-27T03:20:43.788882276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size 
\"52671183\" in 5.438459139s" May 27 03:20:43.790014 containerd[1881]: time="2025-05-27T03:20:43.789994956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:20:43.793448 containerd[1881]: time="2025-05-27T03:20:43.793420498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:20:43.870724 containerd[1881]: time="2025-05-27T03:20:43.870695103Z" level=info msg="CreateContainer within sandbox \"df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:20:44.077102 containerd[1881]: time="2025-05-27T03:20:44.074432041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\" id:\"cbd9771af1fa225eba21b1533093f41857de95c1648f7fef8fe04afbb990d298\" pid:5617 exited_at:{seconds:1748316043 nanos:924487370}" May 27 03:20:44.077666 containerd[1881]: time="2025-05-27T03:20:44.077516235Z" level=info msg="Container 4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:44.198799 containerd[1881]: time="2025-05-27T03:20:44.198737339Z" level=info msg="CreateContainer within sandbox \"df58dd38b7f4d8b6aba068f607653244ae0316df3c6e8fdc28f288f6442625e9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\"" May 27 03:20:44.199418 containerd[1881]: time="2025-05-27T03:20:44.199386031Z" level=info msg="StartContainer for \"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\"" May 27 03:20:44.203876 containerd[1881]: time="2025-05-27T03:20:44.203828347Z" level=info msg="connecting to shim 4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6" 
address="unix:///run/containerd/s/4ed109a4bbf3a4c6004d7b7e0f1b8a2379bba7142983e4f5d337aa754379ad4c" protocol=ttrpc version=3 May 27 03:20:44.245648 systemd[1]: Started cri-containerd-4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6.scope - libcontainer container 4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6. May 27 03:20:44.391623 containerd[1881]: time="2025-05-27T03:20:44.391053065Z" level=info msg="StartContainer for \"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" returns successfully" May 27 03:20:44.565713 containerd[1881]: time="2025-05-27T03:20:44.565626025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"682a0914764d6cd2188d41a354625213d36c60e923c068b16920d9c1a9ed2426\" pid:5682 exit_status:1 exited_at:{seconds:1748316044 nanos:559744234}" May 27 03:20:45.520359 containerd[1881]: time="2025-05-27T03:20:45.520303521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"3cbb3bdbaeac79ca7b84ade2fbb427841a220c89f26459f906c6440fc57aecf8\" pid:5709 exited_at:{seconds:1748316045 nanos:519893543}" May 27 03:20:45.563904 kubelet[3223]: I0527 03:20:45.562598 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bfbc5bcd-5fzgz" podStartSLOduration=25.36103485 podStartE2EDuration="32.562576549s" podCreationTimestamp="2025-05-27 03:20:13 +0000 UTC" firstStartedPulling="2025-05-27 03:20:36.591443145 +0000 UTC m=+45.811691835" lastFinishedPulling="2025-05-27 03:20:43.792984842 +0000 UTC m=+53.013233534" observedRunningTime="2025-05-27 03:20:44.485189157 +0000 UTC m=+53.705437856" watchObservedRunningTime="2025-05-27 03:20:45.562576549 +0000 UTC m=+54.782825249" May 27 03:20:45.778638 containerd[1881]: time="2025-05-27T03:20:45.778512125Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:45.780562 containerd[1881]: time="2025-05-27T03:20:45.780508755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:20:45.782840 containerd[1881]: time="2025-05-27T03:20:45.782782867Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:45.786368 containerd[1881]: time="2025-05-27T03:20:45.786212984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:45.786946 containerd[1881]: time="2025-05-27T03:20:45.786890427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.993408661s" May 27 03:20:45.786946 containerd[1881]: time="2025-05-27T03:20:45.786929153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:20:45.792675 containerd[1881]: time="2025-05-27T03:20:45.792637723Z" level=info msg="CreateContainer within sandbox \"ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:20:45.853099 containerd[1881]: time="2025-05-27T03:20:45.851317967Z" level=info msg="Container d3aaadb142d0321100ed78546c6e2f36807828fe218c1ca56afbb87044901a3e: CDI devices from CRI Config.CDIDevices: []" May 27 
03:20:45.859530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297956896.mount: Deactivated successfully. May 27 03:20:45.896039 containerd[1881]: time="2025-05-27T03:20:45.895989060Z" level=info msg="CreateContainer within sandbox \"ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d3aaadb142d0321100ed78546c6e2f36807828fe218c1ca56afbb87044901a3e\"" May 27 03:20:45.896722 containerd[1881]: time="2025-05-27T03:20:45.896694611Z" level=info msg="StartContainer for \"d3aaadb142d0321100ed78546c6e2f36807828fe218c1ca56afbb87044901a3e\"" May 27 03:20:45.898299 containerd[1881]: time="2025-05-27T03:20:45.898263782Z" level=info msg="connecting to shim d3aaadb142d0321100ed78546c6e2f36807828fe218c1ca56afbb87044901a3e" address="unix:///run/containerd/s/554263b623bc6c0d9cccd94042d3b7656807e5864db3a924bc555a9c4c5e86cb" protocol=ttrpc version=3 May 27 03:20:45.922267 systemd[1]: Started cri-containerd-d3aaadb142d0321100ed78546c6e2f36807828fe218c1ca56afbb87044901a3e.scope - libcontainer container d3aaadb142d0321100ed78546c6e2f36807828fe218c1ca56afbb87044901a3e. 
May 27 03:20:46.044379 containerd[1881]: time="2025-05-27T03:20:46.044332002Z" level=info msg="StartContainer for \"d3aaadb142d0321100ed78546c6e2f36807828fe218c1ca56afbb87044901a3e\" returns successfully" May 27 03:20:46.052817 containerd[1881]: time="2025-05-27T03:20:46.052592955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:20:47.568199 containerd[1881]: time="2025-05-27T03:20:47.568144807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:47.570822 containerd[1881]: time="2025-05-27T03:20:47.570774867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 03:20:47.573101 containerd[1881]: time="2025-05-27T03:20:47.572988093Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:47.577536 containerd[1881]: time="2025-05-27T03:20:47.577020965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:47.577536 containerd[1881]: time="2025-05-27T03:20:47.577391864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.524748405s" May 27 03:20:47.577536 containerd[1881]: time="2025-05-27T03:20:47.577431830Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 03:20:47.579325 containerd[1881]: time="2025-05-27T03:20:47.579283479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:20:47.581704 containerd[1881]: time="2025-05-27T03:20:47.580945788Z" level=info msg="CreateContainer within sandbox \"ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 03:20:47.604398 containerd[1881]: time="2025-05-27T03:20:47.600232753Z" level=info msg="Container cc9e23fe620feedb607f57f063a7140a3da0a8329f1355ed9dae221fdeeea9a6: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:47.616059 containerd[1881]: time="2025-05-27T03:20:47.616001602Z" level=info msg="CreateContainer within sandbox \"ebfbd5afc823c0debf5ef98b362feaae9090931209e3dfbcbbd0b0ffe40027cf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cc9e23fe620feedb607f57f063a7140a3da0a8329f1355ed9dae221fdeeea9a6\"" May 27 03:20:47.616903 containerd[1881]: time="2025-05-27T03:20:47.616868438Z" level=info msg="StartContainer for \"cc9e23fe620feedb607f57f063a7140a3da0a8329f1355ed9dae221fdeeea9a6\"" May 27 03:20:47.619336 containerd[1881]: time="2025-05-27T03:20:47.619298157Z" level=info msg="connecting to shim cc9e23fe620feedb607f57f063a7140a3da0a8329f1355ed9dae221fdeeea9a6" address="unix:///run/containerd/s/554263b623bc6c0d9cccd94042d3b7656807e5864db3a924bc555a9c4c5e86cb" protocol=ttrpc version=3 May 27 03:20:47.669560 systemd[1]: Started sshd@13-172.31.29.86:22-139.178.68.195:59550.service - OpenSSH per-connection server daemon (139.178.68.195:59550). 
May 27 03:20:47.689346 systemd[1]: Started cri-containerd-cc9e23fe620feedb607f57f063a7140a3da0a8329f1355ed9dae221fdeeea9a6.scope - libcontainer container cc9e23fe620feedb607f57f063a7140a3da0a8329f1355ed9dae221fdeeea9a6. May 27 03:20:47.774475 containerd[1881]: time="2025-05-27T03:20:47.774428889Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:20:47.782484 containerd[1881]: time="2025-05-27T03:20:47.782168051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:20:47.786106 containerd[1881]: time="2025-05-27T03:20:47.785462327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:20:47.787837 containerd[1881]: time="2025-05-27T03:20:47.787799369Z" level=info msg="StartContainer for \"cc9e23fe620feedb607f57f063a7140a3da0a8329f1355ed9dae221fdeeea9a6\" returns successfully" May 27 03:20:47.789086 kubelet[3223]: E0527 03:20:47.788835 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:20:47.790682 kubelet[3223]: E0527 03:20:47.790132 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:20:47.794313 kubelet[3223]: E0527 03:20:47.794240 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:434808263c7f46d09d8c48091d884722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:20:47.799953 containerd[1881]: time="2025-05-27T03:20:47.799353160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:20:47.944301 sshd[5769]: Accepted publickey for core from 139.178.68.195 port 59550 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:20:47.950184 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:47.956153 systemd-logind[1869]: New session 11 of user core. May 27 03:20:47.961314 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 27 03:20:47.994423 containerd[1881]: time="2025-05-27T03:20:47.994379598Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:20:47.996516 containerd[1881]: time="2025-05-27T03:20:47.996461208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:20:47.996654 containerd[1881]: time="2025-05-27T03:20:47.996565900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:20:47.996867 kubelet[3223]: E0527 03:20:47.996785 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:20:47.996867 kubelet[3223]: E0527 03:20:47.996848 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:20:47.997634 kubelet[3223]: E0527 03:20:47.997535 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,Env
From:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:20:47.998880 kubelet[3223]: E0527 03:20:47.998780 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:20:48.525813 kubelet[3223]: I0527 03:20:48.516490 3223 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 
May 27 03:20:48.536202 kubelet[3223]: I0527 03:20:48.536133 3223 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 03:20:49.192185 sshd[5791]: Connection closed by 139.178.68.195 port 59550 May 27 03:20:49.192742 sshd-session[5769]: pam_unix(sshd:session): session closed for user core May 27 03:20:49.197056 systemd[1]: sshd@13-172.31.29.86:22-139.178.68.195:59550.service: Deactivated successfully. May 27 03:20:49.199393 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:20:49.201500 systemd-logind[1869]: Session 11 logged out. Waiting for processes to exit. May 27 03:20:49.204412 systemd-logind[1869]: Removed session 11. May 27 03:20:52.216264 containerd[1881]: time="2025-05-27T03:20:52.216209527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"4ff22fe7ac45e568293aa5b5accae6ca3ac91e4750b630db874a17fa37537fa3\" pid:5819 exited_at:{seconds:1748316052 nanos:215856609}" May 27 03:20:54.225247 systemd[1]: Started sshd@14-172.31.29.86:22-139.178.68.195:44520.service - OpenSSH per-connection server daemon (139.178.68.195:44520). May 27 03:20:54.439122 sshd[5836]: Accepted publickey for core from 139.178.68.195 port 44520 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:20:54.440896 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:54.447224 systemd-logind[1869]: New session 12 of user core. May 27 03:20:54.452273 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:20:54.826088 sshd[5838]: Connection closed by 139.178.68.195 port 44520 May 27 03:20:54.827112 sshd-session[5836]: pam_unix(sshd:session): session closed for user core May 27 03:20:54.832199 systemd[1]: sshd@14-172.31.29.86:22-139.178.68.195:44520.service: Deactivated successfully. 
May 27 03:20:54.834750 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:20:54.836682 systemd-logind[1869]: Session 12 logged out. Waiting for processes to exit. May 27 03:20:54.838542 systemd-logind[1869]: Removed session 12. May 27 03:20:54.859764 systemd[1]: Started sshd@15-172.31.29.86:22-139.178.68.195:44530.service - OpenSSH per-connection server daemon (139.178.68.195:44530). May 27 03:20:54.960610 containerd[1881]: time="2025-05-27T03:20:54.960506963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:20:54.987431 kubelet[3223]: I0527 03:20:54.981065 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5zhsw" podStartSLOduration=32.866079941 podStartE2EDuration="42.981038452s" podCreationTimestamp="2025-05-27 03:20:12 +0000 UTC" firstStartedPulling="2025-05-27 03:20:37.463782736 +0000 UTC m=+46.684031426" lastFinishedPulling="2025-05-27 03:20:47.578741247 +0000 UTC m=+56.798989937" observedRunningTime="2025-05-27 03:20:48.548061891 +0000 UTC m=+57.768310592" watchObservedRunningTime="2025-05-27 03:20:54.981038452 +0000 UTC m=+64.201287152" May 27 03:20:55.043828 sshd[5851]: Accepted publickey for core from 139.178.68.195 port 44530 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:20:55.044419 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:55.051858 systemd-logind[1869]: New session 13 of user core. May 27 03:20:55.061301 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 27 03:20:55.143712 containerd[1881]: time="2025-05-27T03:20:55.143522719Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:20:55.145706 containerd[1881]: time="2025-05-27T03:20:55.145652871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:20:55.145863 containerd[1881]: time="2025-05-27T03:20:55.145758921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:20:55.146473 kubelet[3223]: E0527 03:20:55.146217 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:20:55.146473 kubelet[3223]: E0527 03:20:55.146268 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:20:55.146473 kubelet[3223]: E0527 03:20:55.146410 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggznq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-bwzz8_calico-system(4bb469da-f9bf-4f89-a896-e1bdeeceda6d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:20:55.147776 kubelet[3223]: E0527 03:20:55.147739 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:20:55.420462 sshd[5853]: Connection closed by 139.178.68.195 port 44530 May 27 03:20:55.423850 sshd-session[5851]: pam_unix(sshd:session): session closed for user core May 27 03:20:55.434551 systemd[1]: sshd@15-172.31.29.86:22-139.178.68.195:44530.service: Deactivated successfully. May 27 03:20:55.439803 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:20:55.443441 systemd-logind[1869]: Session 13 logged out. Waiting for processes to exit. May 27 03:20:55.469002 systemd[1]: Started sshd@16-172.31.29.86:22-139.178.68.195:44536.service - OpenSSH per-connection server daemon (139.178.68.195:44536). May 27 03:20:55.472587 systemd-logind[1869]: Removed session 13. May 27 03:20:55.675282 sshd[5862]: Accepted publickey for core from 139.178.68.195 port 44536 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:20:55.678864 sshd-session[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:55.686862 systemd-logind[1869]: New session 14 of user core. May 27 03:20:55.689274 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:20:55.979425 sshd[5864]: Connection closed by 139.178.68.195 port 44536 May 27 03:20:55.980146 sshd-session[5862]: pam_unix(sshd:session): session closed for user core May 27 03:20:55.986371 systemd[1]: sshd@16-172.31.29.86:22-139.178.68.195:44536.service: Deactivated successfully. May 27 03:20:55.989938 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:20:55.991538 systemd-logind[1869]: Session 14 logged out. Waiting for processes to exit. May 27 03:20:55.993161 systemd-logind[1869]: Removed session 14. 
May 27 03:20:58.970380 kubelet[3223]: E0527 03:20:58.970301 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:21:01.012256 systemd[1]: Started sshd@17-172.31.29.86:22-139.178.68.195:44548.service - OpenSSH per-connection server daemon (139.178.68.195:44548). May 27 03:21:01.240229 sshd[5884]: Accepted publickey for core from 139.178.68.195 port 44548 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:01.243598 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:01.251613 systemd-logind[1869]: New session 15 of user core. May 27 03:21:01.257368 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:21:01.796265 sshd[5886]: Connection closed by 139.178.68.195 port 44548 May 27 03:21:01.798977 sshd-session[5884]: pam_unix(sshd:session): session closed for user core May 27 03:21:01.816014 systemd[1]: sshd@17-172.31.29.86:22-139.178.68.195:44548.service: Deactivated successfully. May 27 03:21:01.821756 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:21:01.824442 systemd-logind[1869]: Session 15 logged out. Waiting for processes to exit. May 27 03:21:01.827655 systemd-logind[1869]: Removed session 15. May 27 03:21:06.831028 systemd[1]: Started sshd@18-172.31.29.86:22-139.178.68.195:49312.service - OpenSSH per-connection server daemon (139.178.68.195:49312). 
May 27 03:21:07.019380 sshd[5899]: Accepted publickey for core from 139.178.68.195 port 49312 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:07.020985 sshd-session[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:07.027769 systemd-logind[1869]: New session 16 of user core. May 27 03:21:07.030275 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:21:07.481980 sshd[5901]: Connection closed by 139.178.68.195 port 49312 May 27 03:21:07.483803 sshd-session[5899]: pam_unix(sshd:session): session closed for user core May 27 03:21:07.491423 systemd[1]: sshd@18-172.31.29.86:22-139.178.68.195:49312.service: Deactivated successfully. May 27 03:21:07.494893 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:21:07.495872 systemd-logind[1869]: Session 16 logged out. Waiting for processes to exit. May 27 03:21:07.497287 systemd-logind[1869]: Removed session 16. May 27 03:21:08.982739 kubelet[3223]: E0527 03:21:08.982120 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:21:10.989360 containerd[1881]: time="2025-05-27T03:21:10.989297029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:21:11.194329 containerd[1881]: time="2025-05-27T03:21:11.194202262Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:11.196564 containerd[1881]: time="2025-05-27T03:21:11.196482023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:11.201766 kubelet[3223]: E0527 03:21:11.201463 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:11.201766 kubelet[3223]: E0527 03:21:11.201558 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:11.214052 containerd[1881]: time="2025-05-27T03:21:11.196778768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:21:11.228053 kubelet[3223]: E0527 03:21:11.221384 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:434808263c7f46d09d8c48091d884722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:11.231880 containerd[1881]: 
time="2025-05-27T03:21:11.231844958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:21:11.447832 containerd[1881]: time="2025-05-27T03:21:11.447784187Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:11.450174 containerd[1881]: time="2025-05-27T03:21:11.450009660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:11.450174 containerd[1881]: time="2025-05-27T03:21:11.450127043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:21:11.450372 kubelet[3223]: E0527 03:21:11.450282 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:11.450372 kubelet[3223]: E0527 03:21:11.450329 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:11.450510 kubelet[3223]: E0527 03:21:11.450428 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:11.451933 kubelet[3223]: E0527 03:21:11.451883 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:21:12.537546 systemd[1]: Started 
sshd@19-172.31.29.86:22-139.178.68.195:49324.service - OpenSSH per-connection server daemon (139.178.68.195:49324). May 27 03:21:12.896155 sshd[5913]: Accepted publickey for core from 139.178.68.195 port 49324 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:12.906196 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:12.925812 systemd-logind[1869]: New session 17 of user core. May 27 03:21:12.932323 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:21:13.330527 containerd[1881]: time="2025-05-27T03:21:13.330480209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"7c81965caaa09ee5209bc3572ea077fe69866b5e3fab4c701592d53226c526fb\" pid:5937 exited_at:{seconds:1748316073 nanos:329992813}" May 27 03:21:13.746871 kubelet[3223]: I0527 03:21:13.746749 3223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:21:14.138789 sshd[5918]: Connection closed by 139.178.68.195 port 49324 May 27 03:21:14.139808 sshd-session[5913]: pam_unix(sshd:session): session closed for user core May 27 03:21:14.147845 systemd[1]: sshd@19-172.31.29.86:22-139.178.68.195:49324.service: Deactivated successfully. May 27 03:21:14.153391 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:21:14.155139 systemd-logind[1869]: Session 17 logged out. Waiting for processes to exit. May 27 03:21:14.160724 containerd[1881]: time="2025-05-27T03:21:14.160683620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\" id:\"318a7fc944b2984f4fbed2e27948c7ce95f0585d103dafa9aa6c540e0a0f310a\" pid:5959 exited_at:{seconds:1748316074 nanos:158796519}" May 27 03:21:14.161010 systemd-logind[1869]: Removed session 17. 
May 27 03:21:14.184579 systemd[1]: Started sshd@20-172.31.29.86:22-139.178.68.195:55108.service - OpenSSH per-connection server daemon (139.178.68.195:55108). May 27 03:21:14.389271 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 55108 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:14.391655 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:14.404303 systemd-logind[1869]: New session 18 of user core. May 27 03:21:14.410935 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:21:15.207455 sshd[5984]: Connection closed by 139.178.68.195 port 55108 May 27 03:21:15.211220 sshd-session[5982]: pam_unix(sshd:session): session closed for user core May 27 03:21:15.216347 systemd-logind[1869]: Session 18 logged out. Waiting for processes to exit. May 27 03:21:15.217789 systemd[1]: sshd@20-172.31.29.86:22-139.178.68.195:55108.service: Deactivated successfully. May 27 03:21:15.222063 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:21:15.226240 systemd-logind[1869]: Removed session 18. May 27 03:21:15.242983 systemd[1]: Started sshd@21-172.31.29.86:22-139.178.68.195:55122.service - OpenSSH per-connection server daemon (139.178.68.195:55122). May 27 03:21:15.507815 sshd[5994]: Accepted publickey for core from 139.178.68.195 port 55122 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:15.510402 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:15.519011 systemd-logind[1869]: New session 19 of user core. May 27 03:21:15.523486 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 27 03:21:19.617984 sshd[5996]: Connection closed by 139.178.68.195 port 55122 May 27 03:21:19.644339 sshd-session[5994]: pam_unix(sshd:session): session closed for user core May 27 03:21:19.684363 systemd[1]: sshd@21-172.31.29.86:22-139.178.68.195:55122.service: Deactivated successfully. May 27 03:21:19.691839 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:21:19.692208 systemd[1]: session-19.scope: Consumed 831ms CPU time, 73.9M memory peak. May 27 03:21:19.701628 systemd-logind[1869]: Session 19 logged out. Waiting for processes to exit. May 27 03:21:19.713317 systemd[1]: Started sshd@22-172.31.29.86:22-139.178.68.195:55124.service - OpenSSH per-connection server daemon (139.178.68.195:55124). May 27 03:21:19.728219 systemd-logind[1869]: Removed session 19. May 27 03:21:20.103742 sshd[6020]: Accepted publickey for core from 139.178.68.195 port 55124 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:20.113645 sshd-session[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:20.132171 systemd-logind[1869]: New session 20 of user core. May 27 03:21:20.140608 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 27 03:21:21.296799 containerd[1881]: time="2025-05-27T03:21:21.296596374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:21:21.574600 containerd[1881]: time="2025-05-27T03:21:21.574465425Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:21.579094 containerd[1881]: time="2025-05-27T03:21:21.577920427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:21.579094 containerd[1881]: time="2025-05-27T03:21:21.577979856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:21:21.672099 kubelet[3223]: E0527 03:21:21.639027 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:21:21.682696 kubelet[3223]: E0527 03:21:21.682633 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:21:21.777744 kubelet[3223]: E0527 03:21:21.777596 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggznq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-bwzz8_calico-system(4bb469da-f9bf-4f89-a896-e1bdeeceda6d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:21.782302 kubelet[3223]: E0527 03:21:21.782243 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:21:22.700368 sshd[6023]: Connection closed by 139.178.68.195 port 55124 May 27 03:21:22.708050 sshd-session[6020]: pam_unix(sshd:session): session closed for user core May 27 03:21:22.761491 systemd[1]: sshd@22-172.31.29.86:22-139.178.68.195:55124.service: Deactivated successfully. May 27 03:21:22.770290 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:21:22.771153 systemd[1]: session-20.scope: Consumed 940ms CPU time, 67.3M memory peak. May 27 03:21:22.777946 systemd-logind[1869]: Session 20 logged out. Waiting for processes to exit. May 27 03:21:22.786734 systemd[1]: Started sshd@23-172.31.29.86:22-139.178.68.195:55136.service - OpenSSH per-connection server daemon (139.178.68.195:55136). May 27 03:21:22.792251 systemd-logind[1869]: Removed session 20. May 27 03:21:22.879386 containerd[1881]: time="2025-05-27T03:21:22.879214811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"d9344991da466340bd3d3213c02f50979d1bd8b33de2a4387aee2d4b3b33f606\" pid:6043 exited_at:{seconds:1748316082 nanos:789617269}" May 27 03:21:23.098998 sshd[6057]: Accepted publickey for core from 139.178.68.195 port 55136 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:23.103024 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:23.121452 systemd-logind[1869]: New session 21 of user core. May 27 03:21:23.129305 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 27 03:21:23.627104 sshd[6060]: Connection closed by 139.178.68.195 port 55136 May 27 03:21:23.625308 sshd-session[6057]: pam_unix(sshd:session): session closed for user core May 27 03:21:23.639608 systemd[1]: sshd@23-172.31.29.86:22-139.178.68.195:55136.service: Deactivated successfully. May 27 03:21:23.646064 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:21:23.656539 systemd-logind[1869]: Session 21 logged out. Waiting for processes to exit. May 27 03:21:23.661313 systemd-logind[1869]: Removed session 21. May 27 03:21:25.990934 kubelet[3223]: E0527 03:21:25.990877 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:21:28.659249 systemd[1]: Started sshd@24-172.31.29.86:22-139.178.68.195:36934.service - OpenSSH per-connection server daemon (139.178.68.195:36934). May 27 03:21:28.873094 sshd[6075]: Accepted publickey for core from 139.178.68.195 port 36934 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:28.874229 sshd-session[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:28.883437 systemd-logind[1869]: New session 22 of user core. May 27 03:21:28.889306 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:21:29.160949 sshd[6077]: Connection closed by 139.178.68.195 port 36934 May 27 03:21:29.164334 sshd-session[6075]: pam_unix(sshd:session): session closed for user core May 27 03:21:29.172350 systemd[1]: sshd@24-172.31.29.86:22-139.178.68.195:36934.service: Deactivated successfully. 
May 27 03:21:29.176834 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:21:29.180651 systemd-logind[1869]: Session 22 logged out. Waiting for processes to exit. May 27 03:21:29.185185 systemd-logind[1869]: Removed session 22. May 27 03:21:33.952926 kubelet[3223]: E0527 03:21:33.952860 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:21:34.192839 systemd[1]: Started sshd@25-172.31.29.86:22-139.178.68.195:39972.service - OpenSSH per-connection server daemon (139.178.68.195:39972). May 27 03:21:34.448936 sshd[6091]: Accepted publickey for core from 139.178.68.195 port 39972 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:34.451483 sshd-session[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:34.458285 systemd-logind[1869]: New session 23 of user core. May 27 03:21:34.463351 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:21:34.720285 sshd[6093]: Connection closed by 139.178.68.195 port 39972 May 27 03:21:34.722466 sshd-session[6091]: pam_unix(sshd:session): session closed for user core May 27 03:21:34.728758 systemd[1]: sshd@25-172.31.29.86:22-139.178.68.195:39972.service: Deactivated successfully. May 27 03:21:34.732061 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:21:34.734678 systemd-logind[1869]: Session 23 logged out. Waiting for processes to exit. May 27 03:21:34.738257 systemd-logind[1869]: Removed session 23. May 27 03:21:39.759343 systemd[1]: Started sshd@26-172.31.29.86:22-139.178.68.195:39980.service - OpenSSH per-connection server daemon (139.178.68.195:39980). 
May 27 03:21:39.969301 sshd[6106]: Accepted publickey for core from 139.178.68.195 port 39980 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:39.971252 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:39.981578 systemd-logind[1869]: New session 24 of user core. May 27 03:21:39.988185 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 03:21:40.266061 sshd[6108]: Connection closed by 139.178.68.195 port 39980 May 27 03:21:40.267713 sshd-session[6106]: pam_unix(sshd:session): session closed for user core May 27 03:21:40.271645 systemd-logind[1869]: Session 24 logged out. Waiting for processes to exit. May 27 03:21:40.273765 systemd[1]: sshd@26-172.31.29.86:22-139.178.68.195:39980.service: Deactivated successfully. May 27 03:21:40.276924 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:21:40.280215 systemd-logind[1869]: Removed session 24. May 27 03:21:40.953927 kubelet[3223]: E0527 03:21:40.953844 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:21:44.288153 containerd[1881]: time="2025-05-27T03:21:44.273939456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\" id:\"8fb9d4b3b0f208b9972f849f86c2c3613ed381b3679a0b70c628c7e5c04050d3\" pid:6131 exited_at:{seconds:1748316104 nanos:261957990}" May 27 03:21:45.305000 systemd[1]: Started sshd@27-172.31.29.86:22-139.178.68.195:56214.service - OpenSSH per-connection server daemon (139.178.68.195:56214). 
May 27 03:21:45.587739 sshd[6144]: Accepted publickey for core from 139.178.68.195 port 56214 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:45.591229 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:45.599189 systemd-logind[1869]: New session 25 of user core. May 27 03:21:45.608346 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 03:21:46.485370 sshd[6147]: Connection closed by 139.178.68.195 port 56214 May 27 03:21:46.486279 sshd-session[6144]: pam_unix(sshd:session): session closed for user core May 27 03:21:46.493184 systemd-logind[1869]: Session 25 logged out. Waiting for processes to exit. May 27 03:21:46.493851 systemd[1]: sshd@27-172.31.29.86:22-139.178.68.195:56214.service: Deactivated successfully. May 27 03:21:46.498741 systemd[1]: session-25.scope: Deactivated successfully. May 27 03:21:46.503730 systemd-logind[1869]: Removed session 25. May 27 03:21:47.967258 kubelet[3223]: E0527 03:21:47.951576 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:21:51.520997 systemd[1]: Started sshd@28-172.31.29.86:22-139.178.68.195:56224.service - OpenSSH per-connection server daemon (139.178.68.195:56224). May 27 03:21:51.716733 sshd[6161]: Accepted publickey for core from 139.178.68.195 port 56224 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:21:51.718878 sshd-session[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:51.726143 systemd-logind[1869]: New session 26 of user core. May 27 03:21:51.733320 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 27 03:21:51.990891 sshd[6163]: Connection closed by 139.178.68.195 port 56224 May 27 03:21:51.992217 sshd-session[6161]: pam_unix(sshd:session): session closed for user core May 27 03:21:51.996299 systemd[1]: sshd@28-172.31.29.86:22-139.178.68.195:56224.service: Deactivated successfully. May 27 03:21:51.999572 systemd[1]: session-26.scope: Deactivated successfully. May 27 03:21:52.003012 systemd-logind[1869]: Session 26 logged out. Waiting for processes to exit. May 27 03:21:52.004530 systemd-logind[1869]: Removed session 26. May 27 03:21:52.169301 containerd[1881]: time="2025-05-27T03:21:52.169248539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"83cabedffabf4f179f09eb2677334d526c38229107b01ecb1c8231cfe295c638\" pid:6188 exited_at:{seconds:1748316112 nanos:168880378}" May 27 03:21:54.955414 containerd[1881]: time="2025-05-27T03:21:54.955383972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:21:55.159390 containerd[1881]: time="2025-05-27T03:21:55.159329593Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:55.161511 containerd[1881]: time="2025-05-27T03:21:55.161459115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:55.161692 containerd[1881]: time="2025-05-27T03:21:55.161561286Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:21:55.172438 kubelet[3223]: E0527 03:21:55.171156 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:55.175062 kubelet[3223]: E0527 03:21:55.175016 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:55.194141 kubelet[3223]: E0527 03:21:55.194056 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:434808263c7f46d09d8c48091d884722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:55.196340 containerd[1881]: 
time="2025-05-27T03:21:55.196299988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:21:55.373942 containerd[1881]: time="2025-05-27T03:21:55.373875690Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:55.375969 containerd[1881]: time="2025-05-27T03:21:55.375922275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:55.376556 containerd[1881]: time="2025-05-27T03:21:55.375950715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:21:55.376602 kubelet[3223]: E0527 03:21:55.376175 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:55.376602 kubelet[3223]: E0527 03:21:55.376229 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:55.376602 kubelet[3223]: E0527 03:21:55.376341 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cffdd558-mvblc_calico-system(594fafad-e868-4ec4-9b96-4b356334c568): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:55.378520 kubelet[3223]: E0527 03:21:55.378434 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568" May 27 03:22:00.952503 kubelet[3223]: E0527 03:22:00.952354 3223 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d" May 27 03:22:06.835824 systemd[1]: cri-containerd-8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe.scope: Deactivated successfully. May 27 03:22:06.836611 systemd[1]: cri-containerd-8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe.scope: Consumed 3.634s CPU time, 84.6M memory peak, 103.9M read from disk. May 27 03:22:06.918640 containerd[1881]: time="2025-05-27T03:22:06.918590819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe\" id:\"8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe\" pid:3036 exit_status:1 exited_at:{seconds:1748316126 nanos:889841174}" May 27 03:22:06.933592 containerd[1881]: time="2025-05-27T03:22:06.933539840Z" level=info msg="received exit event container_id:\"8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe\" id:\"8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe\" pid:3036 exit_status:1 exited_at:{seconds:1748316126 nanos:889841174}" May 27 03:22:07.021649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe-rootfs.mount: Deactivated successfully. May 27 03:22:07.423358 kubelet[3223]: I0527 03:22:07.423308 3223 scope.go:117] "RemoveContainer" containerID="8b03396b44308e5150b05bca304d868eb608469c4f08c23f9df11fa19b8904fe" May 27 03:22:07.436307 systemd[1]: cri-containerd-5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956.scope: Deactivated successfully. 
May 27 03:22:07.436583 systemd[1]: cri-containerd-5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956.scope: Consumed 14.251s CPU time, 109.1M memory peak, 81.2M read from disk. May 27 03:22:07.439652 containerd[1881]: time="2025-05-27T03:22:07.439623368Z" level=info msg="received exit event container_id:\"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\" id:\"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\" pid:3810 exit_status:1 exited_at:{seconds:1748316127 nanos:439030230}" May 27 03:22:07.439978 containerd[1881]: time="2025-05-27T03:22:07.439714478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\" id:\"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\" pid:3810 exit_status:1 exited_at:{seconds:1748316127 nanos:439030230}" May 27 03:22:07.473479 containerd[1881]: time="2025-05-27T03:22:07.473426742Z" level=info msg="CreateContainer within sandbox \"6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 27 03:22:07.481710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956-rootfs.mount: Deactivated successfully. May 27 03:22:07.593918 containerd[1881]: time="2025-05-27T03:22:07.593867645Z" level=info msg="Container b895974de9067a1befb3f1642e9e55d311cae200a150a7121888f61a16cc5dc2: CDI devices from CRI Config.CDIDevices: []" May 27 03:22:07.595615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2433784714.mount: Deactivated successfully. 
May 27 03:22:07.625358 containerd[1881]: time="2025-05-27T03:22:07.625266671Z" level=info msg="CreateContainer within sandbox \"6ca2a379b85f5d3e9af7c00333985c206fdbf3b7e00d31cf768ce1674ed05360\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b895974de9067a1befb3f1642e9e55d311cae200a150a7121888f61a16cc5dc2\"" May 27 03:22:07.635393 containerd[1881]: time="2025-05-27T03:22:07.635354531Z" level=info msg="StartContainer for \"b895974de9067a1befb3f1642e9e55d311cae200a150a7121888f61a16cc5dc2\"" May 27 03:22:07.636709 containerd[1881]: time="2025-05-27T03:22:07.636531729Z" level=info msg="connecting to shim b895974de9067a1befb3f1642e9e55d311cae200a150a7121888f61a16cc5dc2" address="unix:///run/containerd/s/2b21ccd8259ff4f04e018029106945ce8830922396f44bc2d3df48c2be7542fd" protocol=ttrpc version=3 May 27 03:22:07.725113 systemd[1]: Started cri-containerd-b895974de9067a1befb3f1642e9e55d311cae200a150a7121888f61a16cc5dc2.scope - libcontainer container b895974de9067a1befb3f1642e9e55d311cae200a150a7121888f61a16cc5dc2. 
May 27 03:22:07.822561 containerd[1881]: time="2025-05-27T03:22:07.822521461Z" level=info msg="StartContainer for \"b895974de9067a1befb3f1642e9e55d311cae200a150a7121888f61a16cc5dc2\" returns successfully" May 27 03:22:08.425962 kubelet[3223]: I0527 03:22:08.425929 3223 scope.go:117] "RemoveContainer" containerID="5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956" May 27 03:22:08.439092 containerd[1881]: time="2025-05-27T03:22:08.438689564Z" level=info msg="CreateContainer within sandbox \"3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 27 03:22:08.459221 containerd[1881]: time="2025-05-27T03:22:08.458313252Z" level=info msg="Container 9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531: CDI devices from CRI Config.CDIDevices: []" May 27 03:22:08.474693 containerd[1881]: time="2025-05-27T03:22:08.474637321Z" level=info msg="CreateContainer within sandbox \"3cfb0874d94eddac9cdfd5ea2086c4dba22f85a271839e13d8163e2fb3a94d0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531\"" May 27 03:22:08.475137 containerd[1881]: time="2025-05-27T03:22:08.475095578Z" level=info msg="StartContainer for \"9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531\"" May 27 03:22:08.476766 containerd[1881]: time="2025-05-27T03:22:08.476578463Z" level=info msg="connecting to shim 9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531" address="unix:///run/containerd/s/b9de8f355ad8ec20e6b5605dea5f51d9b6c035edec69c7d4e71489ca6f11a3fc" protocol=ttrpc version=3 May 27 03:22:08.516351 systemd[1]: Started cri-containerd-9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531.scope - libcontainer container 9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531. 
May 27 03:22:08.571095 containerd[1881]: time="2025-05-27T03:22:08.570223278Z" level=info msg="StartContainer for \"9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531\" returns successfully"
May 27 03:22:08.952192 kubelet[3223]: E0527 03:22:08.952152 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568"
May 27 03:22:11.951670 containerd[1881]: time="2025-05-27T03:22:11.951601724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:22:12.126852 containerd[1881]: time="2025-05-27T03:22:12.126797482Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:22:12.129303 containerd[1881]: time="2025-05-27T03:22:12.129110534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:22:12.129499 containerd[1881]: time="2025-05-27T03:22:12.129135173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:22:12.129745 kubelet[3223]: E0527 03:22:12.129701 3223 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:22:12.129745 kubelet[3223]: E0527 03:22:12.129754 3223 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:22:12.130336 kubelet[3223]: E0527 03:22:12.129941 3223 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggznq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-bwzz8_calico-system(4bb469da-f9bf-4f89-a896-e1bdeeceda6d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:22:12.131219 kubelet[3223]: E0527 03:22:12.131156 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-bwzz8" podUID="4bb469da-f9bf-4f89-a896-e1bdeeceda6d"
May 27 03:22:13.018289 containerd[1881]: time="2025-05-27T03:22:13.018147689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"3ccb1a1691408452a8af55602e03c0a71e57e9648c4caade40d76de6229ce177\" pid:6329 exit_status:1 exited_at:{seconds:1748316133 nanos:17811856}"
May 27 03:22:13.102422 systemd[1]: cri-containerd-10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1.scope: Deactivated successfully.
May 27 03:22:13.103253 systemd[1]: cri-containerd-10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1.scope: Consumed 1.823s CPU time, 37.8M memory peak, 67.9M read from disk.
May 27 03:22:13.107718 containerd[1881]: time="2025-05-27T03:22:13.107512952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1\" id:\"10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1\" pid:3064 exit_status:1 exited_at:{seconds:1748316133 nanos:106943944}"
May 27 03:22:13.108337 containerd[1881]: time="2025-05-27T03:22:13.108302909Z" level=info msg="received exit event container_id:\"10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1\" id:\"10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1\" pid:3064 exit_status:1 exited_at:{seconds:1748316133 nanos:106943944}"
May 27 03:22:13.143778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1-rootfs.mount: Deactivated successfully.
May 27 03:22:13.453087 kubelet[3223]: I0527 03:22:13.453001 3223 scope.go:117] "RemoveContainer" containerID="10b32f04513f888c68d2b732b096df304de3ccb549235b5a62e6eefcb5749ba1"
May 27 03:22:13.455891 containerd[1881]: time="2025-05-27T03:22:13.455859090Z" level=info msg="CreateContainer within sandbox \"87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 27 03:22:13.480243 containerd[1881]: time="2025-05-27T03:22:13.480203693Z" level=info msg="Container 3d271d3d000a67348432a343d35344d545704b81b38d6e57d062740d433f4cbb: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:13.483782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1163499560.mount: Deactivated successfully.
May 27 03:22:13.494612 containerd[1881]: time="2025-05-27T03:22:13.494565862Z" level=info msg="CreateContainer within sandbox \"87758cdfe83da633f7f1d0035874f27a8b3fb15f04069a6f75c39b937f6c96ae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3d271d3d000a67348432a343d35344d545704b81b38d6e57d062740d433f4cbb\""
May 27 03:22:13.495457 containerd[1881]: time="2025-05-27T03:22:13.495360782Z" level=info msg="StartContainer for \"3d271d3d000a67348432a343d35344d545704b81b38d6e57d062740d433f4cbb\""
May 27 03:22:13.496950 containerd[1881]: time="2025-05-27T03:22:13.496916498Z" level=info msg="connecting to shim 3d271d3d000a67348432a343d35344d545704b81b38d6e57d062740d433f4cbb" address="unix:///run/containerd/s/c48784f9d19d583ad7d20e9314650ba4633ac1e2eddc472a517428c37b65dcfd" protocol=ttrpc version=3
May 27 03:22:13.536269 systemd[1]: Started cri-containerd-3d271d3d000a67348432a343d35344d545704b81b38d6e57d062740d433f4cbb.scope - libcontainer container 3d271d3d000a67348432a343d35344d545704b81b38d6e57d062740d433f4cbb.
May 27 03:22:13.574450 containerd[1881]: time="2025-05-27T03:22:13.574399994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40b8443997091d4ad4bdf30db64965314ae18d4063d28163723675a06131639e\" id:\"083c7f79f783f1145e78ad0d9d4bed08cb4d2ddb548472752970c0f51e2d6f59\" pid:6362 exited_at:{seconds:1748316133 nanos:573748511}"
May 27 03:22:13.618339 containerd[1881]: time="2025-05-27T03:22:13.618302203Z" level=info msg="StartContainer for \"3d271d3d000a67348432a343d35344d545704b81b38d6e57d062740d433f4cbb\" returns successfully"
May 27 03:22:14.425157 kubelet[3223]: E0527 03:22:14.424817 3223 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-86?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 27 03:22:16.584133 systemd[1]: sshd@11-172.31.29.86:22-46.235.84.183:37014.service: Deactivated successfully.
May 27 03:22:20.140889 systemd[1]: cri-containerd-9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531.scope: Deactivated successfully.
May 27 03:22:20.142266 systemd[1]: cri-containerd-9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531.scope: Consumed 297ms CPU time, 68.3M memory peak, 36.4M read from disk.
May 27 03:22:20.144488 containerd[1881]: time="2025-05-27T03:22:20.144452753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531\" id:\"9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531\" pid:6297 exit_status:1 exited_at:{seconds:1748316140 nanos:143559935}"
May 27 03:22:20.144919 containerd[1881]: time="2025-05-27T03:22:20.144601928Z" level=info msg="received exit event container_id:\"9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531\" id:\"9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531\" pid:6297 exit_status:1 exited_at:{seconds:1748316140 nanos:143559935}"
May 27 03:22:20.169612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531-rootfs.mount: Deactivated successfully.
May 27 03:22:20.480161 kubelet[3223]: I0527 03:22:20.480021 3223 scope.go:117] "RemoveContainer" containerID="5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956"
May 27 03:22:20.481269 kubelet[3223]: I0527 03:22:20.480659 3223 scope.go:117] "RemoveContainer" containerID="9870fbef8fcc0ba6032563568ea4fb7c57b5334e3bca9cd74c220549b5d39531"
May 27 03:22:20.481269 kubelet[3223]: E0527 03:22:20.480946 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7c5755cdcb-8g6mf_tigera-operator(03a46d5a-12bb-4765-908d-1477aa0a0fb3)\"" pod="tigera-operator/tigera-operator-7c5755cdcb-8g6mf" podUID="03a46d5a-12bb-4765-908d-1477aa0a0fb3"
May 27 03:22:20.602155 containerd[1881]: time="2025-05-27T03:22:20.602094567Z" level=info msg="RemoveContainer for \"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\""
May 27 03:22:20.651028 containerd[1881]: time="2025-05-27T03:22:20.650962364Z" level=info msg="RemoveContainer for \"5c3ac1949fd7add99abad022a9d62f999dafc594d17bbdd0c6d52ce37437d956\" returns successfully"
May 27 03:22:22.145881 containerd[1881]: time="2025-05-27T03:22:22.145838489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4baeb0577cee8635b95917ac54c319161280774412061c7b0391763691497ee6\" id:\"9656d14de514f21f598b153e95cf28bc5d14fcc04d66255151e313d9c2011321\" pid:6433 exit_status:1 exited_at:{seconds:1748316142 nanos:145403863}"
May 27 03:22:22.951447 kubelet[3223]: E0527 03:22:22.951328 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cffdd558-mvblc" podUID="594fafad-e868-4ec4-9b96-4b356334c568"
May 27 03:22:24.426095 kubelet[3223]: E0527 03:22:24.425337 3223 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-86?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"