May 27 03:22:54.930628 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025 May 27 03:22:54.930667 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:22:54.930683 kernel: BIOS-provided physical RAM map: May 27 03:22:54.930695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 27 03:22:54.930706 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 27 03:22:54.930718 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 27 03:22:54.930732 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 27 03:22:54.930745 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 27 03:22:54.930760 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 27 03:22:54.930772 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 27 03:22:54.930784 kernel: NX (Execute Disable) protection: active May 27 03:22:54.930797 kernel: APIC: Static calls initialized May 27 03:22:54.930809 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable May 27 03:22:54.930822 kernel: extended physical RAM map: May 27 03:22:54.930841 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 27 03:22:54.930855 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable May 27 03:22:54.930869 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable May 27 03:22:54.930882 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable May 27 03:22:54.930896 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved May 27 03:22:54.930910 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 27 03:22:54.930923 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 27 03:22:54.930937 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable May 27 03:22:54.930951 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 27 03:22:54.930964 kernel: efi: EFI v2.7 by EDK II May 27 03:22:54.930981 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 27 03:22:54.930994 kernel: secureboot: Secure boot disabled May 27 03:22:54.931008 kernel: SMBIOS 2.7 present. May 27 03:22:54.931021 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 27 03:22:54.931035 kernel: DMI: Memory slots populated: 1/1 May 27 03:22:54.931064 kernel: Hypervisor detected: KVM May 27 03:22:54.931077 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 03:22:54.931091 kernel: kvm-clock: using sched offset of 5210070653 cycles May 27 03:22:54.931106 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 03:22:54.931119 kernel: tsc: Detected 2500.008 MHz processor May 27 03:22:54.931133 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 03:22:54.931150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 03:22:54.931163 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 27 03:22:54.931177 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 27 03:22:54.931191 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 03:22:54.931206 kernel: Using GB pages 
for direct mapping May 27 03:22:54.931225 kernel: ACPI: Early table checksum verification disabled May 27 03:22:54.931242 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 27 03:22:54.931257 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 27 03:22:54.931272 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 27 03:22:54.931287 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 27 03:22:54.931302 kernel: ACPI: FACS 0x00000000789D0000 000040 May 27 03:22:54.931317 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 27 03:22:54.931480 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 27 03:22:54.931496 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 27 03:22:54.931514 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 27 03:22:54.931529 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 27 03:22:54.931544 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 27 03:22:54.931559 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 27 03:22:54.931574 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 27 03:22:54.931589 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 27 03:22:54.931604 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 27 03:22:54.931618 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 27 03:22:54.931636 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 27 03:22:54.931650 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 27 03:22:54.931663 kernel: ACPI: 
Reserving APIC table memory at [mem 0x78959000-0x78959075] May 27 03:22:54.931676 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 27 03:22:54.931703 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 27 03:22:54.931715 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 27 03:22:54.931728 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 27 03:22:54.931742 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] May 27 03:22:54.931755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 27 03:22:54.931772 kernel: NUMA: Initialized distance table, cnt=1 May 27 03:22:54.931786 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] May 27 03:22:54.931798 kernel: Zone ranges: May 27 03:22:54.931810 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 03:22:54.931824 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 27 03:22:54.931838 kernel: Normal empty May 27 03:22:54.931852 kernel: Device empty May 27 03:22:54.931864 kernel: Movable zone start for each node May 27 03:22:54.931877 kernel: Early memory node ranges May 27 03:22:54.931891 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 27 03:22:54.931907 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 27 03:22:54.931919 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 27 03:22:54.931939 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 27 03:22:54.931951 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 03:22:54.931964 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 27 03:22:54.931976 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 27 03:22:54.931990 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 27 03:22:54.932002 kernel: ACPI: PM-Timer IO Port: 0xb008 May 27 03:22:54.932017 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl 
dfl lint[0x1]) May 27 03:22:54.932032 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 27 03:22:54.932044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 03:22:54.932057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 03:22:54.932070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 03:22:54.932084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 03:22:54.932098 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 03:22:54.932111 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 27 03:22:54.932124 kernel: TSC deadline timer available May 27 03:22:54.932138 kernel: CPU topo: Max. logical packages: 1 May 27 03:22:54.932155 kernel: CPU topo: Max. logical dies: 1 May 27 03:22:54.932170 kernel: CPU topo: Max. dies per package: 1 May 27 03:22:54.932184 kernel: CPU topo: Max. threads per core: 2 May 27 03:22:54.932198 kernel: CPU topo: Num. cores per package: 1 May 27 03:22:54.932213 kernel: CPU topo: Num. 
threads per package: 2 May 27 03:22:54.932227 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 27 03:22:54.932241 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 03:22:54.932256 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 27 03:22:54.932270 kernel: Booting paravirtualized kernel on KVM May 27 03:22:54.932285 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 03:22:54.932302 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 27 03:22:54.932317 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 27 03:22:54.932408 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 27 03:22:54.932423 kernel: pcpu-alloc: [0] 0 1 May 27 03:22:54.932437 kernel: kvm-guest: PV spinlocks enabled May 27 03:22:54.932452 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 27 03:22:54.932470 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:22:54.932485 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 27 03:22:54.932503 kernel: random: crng init done May 27 03:22:54.932517 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 03:22:54.932531 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 27 03:22:54.932544 kernel: Fallback order for Node 0: 0 May 27 03:22:54.932558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 May 27 03:22:54.932571 kernel: Policy zone: DMA32 May 27 03:22:54.932598 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 03:22:54.932613 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 27 03:22:54.932628 kernel: Kernel/User page tables isolation: enabled May 27 03:22:54.932642 kernel: ftrace: allocating 40081 entries in 157 pages May 27 03:22:54.932657 kernel: ftrace: allocated 157 pages with 5 groups May 27 03:22:54.932674 kernel: Dynamic Preempt: voluntary May 27 03:22:54.932689 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 03:22:54.932705 kernel: rcu: RCU event tracing is enabled. May 27 03:22:54.932719 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 27 03:22:54.932734 kernel: Trampoline variant of Tasks RCU enabled. May 27 03:22:54.932749 kernel: Rude variant of Tasks RCU enabled. May 27 03:22:54.932765 kernel: Tracing variant of Tasks RCU enabled. May 27 03:22:54.932780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 03:22:54.932794 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 27 03:22:54.932810 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 03:22:54.932825 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 03:22:54.932840 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 27 03:22:54.932855 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 27 03:22:54.932869 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 03:22:54.932886 kernel: Console: colour dummy device 80x25 May 27 03:22:54.932901 kernel: printk: legacy console [tty0] enabled May 27 03:22:54.932916 kernel: printk: legacy console [ttyS0] enabled May 27 03:22:54.932930 kernel: ACPI: Core revision 20240827 May 27 03:22:54.932946 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 27 03:22:54.932960 kernel: APIC: Switch to symmetric I/O mode setup May 27 03:22:54.932974 kernel: x2apic enabled May 27 03:22:54.932990 kernel: APIC: Switched APIC routing to: physical x2apic May 27 03:22:54.933004 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2409413c780, max_idle_ns: 440795222072 ns May 27 03:22:54.933022 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500008) May 27 03:22:54.933037 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 27 03:22:54.933051 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 27 03:22:54.933065 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 03:22:54.933080 kernel: Spectre V2 : Mitigation: Retpolines May 27 03:22:54.933094 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 03:22:54.933108 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 27 03:22:54.933123 kernel: RETBleed: Vulnerable May 27 03:22:54.933137 kernel: Speculative Store Bypass: Vulnerable May 27 03:22:54.933151 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 27 03:22:54.933165 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 27 03:22:54.933182 kernel: GDS: Unknown: Dependent on hypervisor status May 27 03:22:54.933196 kernel: ITS: Mitigation: Aligned branch/return thunks May 27 03:22:54.933211 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 27 03:22:54.933225 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 27 03:22:54.933240 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 27 03:22:54.933255 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 27 03:22:54.933269 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 27 03:22:54.933283 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 27 03:22:54.933298 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 27 03:22:54.933312 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 27 03:22:54.933354 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 27 03:22:54.933368 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 27 03:22:54.933388 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 27 03:22:54.933402 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 27 03:22:54.933417 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 27 03:22:54.933431 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 27 03:22:54.933446 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 27 03:22:54.933460 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 27 03:22:54.933475 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' 
format. May 27 03:22:54.933490 kernel: Freeing SMP alternatives memory: 32K May 27 03:22:54.933504 kernel: pid_max: default: 32768 minimum: 301 May 27 03:22:54.933518 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 03:22:54.933535 kernel: landlock: Up and running. May 27 03:22:54.933550 kernel: SELinux: Initializing. May 27 03:22:54.933564 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 03:22:54.933577 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 03:22:54.933592 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 27 03:22:54.933607 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 27 03:22:54.933620 kernel: signal: max sigframe size: 3632 May 27 03:22:54.933633 kernel: rcu: Hierarchical SRCU implementation. May 27 03:22:54.933647 kernel: rcu: Max phase no-delay instances is 400. May 27 03:22:54.933662 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 03:22:54.933680 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 27 03:22:54.933693 kernel: smp: Bringing up secondary CPUs ... May 27 03:22:54.933711 kernel: smpboot: x86: Booting SMP configuration: May 27 03:22:54.933742 kernel: .... node #0, CPUs: #1 May 27 03:22:54.933756 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 27 03:22:54.933771 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 27 03:22:54.933787 kernel: smp: Brought up 1 node, 2 CPUs May 27 03:22:54.933800 kernel: smpboot: Total of 2 processors activated (10000.03 BogoMIPS) May 27 03:22:54.933819 kernel: Memory: 1908048K/2037804K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125192K reserved, 0K cma-reserved) May 27 03:22:54.933836 kernel: devtmpfs: initialized May 27 03:22:54.933851 kernel: x86/mm: Memory block size: 128MB May 27 03:22:54.933866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 27 03:22:54.933882 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 03:22:54.933898 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 27 03:22:54.933913 kernel: pinctrl core: initialized pinctrl subsystem May 27 03:22:54.933929 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 03:22:54.933944 kernel: audit: initializing netlink subsys (disabled) May 27 03:22:54.933963 kernel: audit: type=2000 audit(1748316172.355:1): state=initialized audit_enabled=0 res=1 May 27 03:22:54.933978 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 03:22:54.933994 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 03:22:54.934009 kernel: cpuidle: using governor menu May 27 03:22:54.934025 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 03:22:54.934040 kernel: dca service started, version 1.12.1 May 27 03:22:54.934056 kernel: PCI: Using configuration type 1 for base access May 27 03:22:54.934071 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 27 03:22:54.934085 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 03:22:54.934104 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 27 03:22:54.934118 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 03:22:54.934133 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 03:22:54.934148 kernel: ACPI: Added _OSI(Module Device) May 27 03:22:54.934162 kernel: ACPI: Added _OSI(Processor Device) May 27 03:22:54.934176 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 03:22:54.934191 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 03:22:54.934205 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 27 03:22:54.934219 kernel: ACPI: Interpreter enabled May 27 03:22:54.934237 kernel: ACPI: PM: (supports S0 S5) May 27 03:22:54.934252 kernel: ACPI: Using IOAPIC for interrupt routing May 27 03:22:54.934266 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 03:22:54.934282 kernel: PCI: Using E820 reservations for host bridge windows May 27 03:22:54.934296 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 27 03:22:54.934311 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 03:22:54.934541 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 27 03:22:54.934672 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 27 03:22:54.934800 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 27 03:22:54.934817 kernel: acpiphp: Slot [3] registered May 27 03:22:54.934848 kernel: acpiphp: Slot [4] registered May 27 03:22:54.934862 kernel: acpiphp: Slot [5] registered May 27 03:22:54.934877 kernel: acpiphp: Slot [6] registered May 27 03:22:54.934891 kernel: acpiphp: Slot [7] registered May 27 03:22:54.934913 kernel: acpiphp: Slot [8] 
registered May 27 03:22:54.934928 kernel: acpiphp: Slot [9] registered May 27 03:22:54.934942 kernel: acpiphp: Slot [10] registered May 27 03:22:54.934959 kernel: acpiphp: Slot [11] registered May 27 03:22:54.934973 kernel: acpiphp: Slot [12] registered May 27 03:22:54.934992 kernel: acpiphp: Slot [13] registered May 27 03:22:54.935009 kernel: acpiphp: Slot [14] registered May 27 03:22:54.935023 kernel: acpiphp: Slot [15] registered May 27 03:22:54.935038 kernel: acpiphp: Slot [16] registered May 27 03:22:54.935055 kernel: acpiphp: Slot [17] registered May 27 03:22:54.935068 kernel: acpiphp: Slot [18] registered May 27 03:22:54.935081 kernel: acpiphp: Slot [19] registered May 27 03:22:54.935098 kernel: acpiphp: Slot [20] registered May 27 03:22:54.935109 kernel: acpiphp: Slot [21] registered May 27 03:22:54.935121 kernel: acpiphp: Slot [22] registered May 27 03:22:54.935134 kernel: acpiphp: Slot [23] registered May 27 03:22:54.935147 kernel: acpiphp: Slot [24] registered May 27 03:22:54.935161 kernel: acpiphp: Slot [25] registered May 27 03:22:54.935176 kernel: acpiphp: Slot [26] registered May 27 03:22:54.935190 kernel: acpiphp: Slot [27] registered May 27 03:22:54.935206 kernel: acpiphp: Slot [28] registered May 27 03:22:54.935220 kernel: acpiphp: Slot [29] registered May 27 03:22:54.935239 kernel: acpiphp: Slot [30] registered May 27 03:22:54.935253 kernel: acpiphp: Slot [31] registered May 27 03:22:54.935268 kernel: PCI host bridge to bus 0000:00 May 27 03:22:54.936522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 03:22:54.936663 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 03:22:54.936786 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 27 03:22:54.936908 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 27 03:22:54.937034 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 27 03:22:54.937152 kernel: 
pci_bus 0000:00: root bus resource [bus 00-ff] May 27 03:22:54.937303 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint May 27 03:22:54.939525 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint May 27 03:22:54.939678 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint May 27 03:22:54.939819 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 27 03:22:54.939960 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 27 03:22:54.940096 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 27 03:22:54.940229 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 27 03:22:54.940378 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 27 03:22:54.940514 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 27 03:22:54.940648 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 27 03:22:54.940788 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint May 27 03:22:54.940928 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] May 27 03:22:54.941062 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] May 27 03:22:54.941196 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 27 03:22:54.942402 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint May 27 03:22:54.942564 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] May 27 03:22:54.942716 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint May 27 03:22:54.942852 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] May 27 03:22:54.942878 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 03:22:54.942894 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 03:22:54.942908 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 27 
03:22:54.942923 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 03:22:54.942938 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 27 03:22:54.942953 kernel: iommu: Default domain type: Translated May 27 03:22:54.942969 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 03:22:54.942984 kernel: efivars: Registered efivars operations May 27 03:22:54.943002 kernel: PCI: Using ACPI for IRQ routing May 27 03:22:54.943017 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 03:22:54.943032 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] May 27 03:22:54.943047 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 27 03:22:54.943062 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 27 03:22:54.943207 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 27 03:22:54.944379 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 27 03:22:54.944531 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 03:22:54.944550 kernel: vgaarb: loaded May 27 03:22:54.944569 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 27 03:22:54.944583 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 27 03:22:54.944597 kernel: clocksource: Switched to clocksource kvm-clock May 27 03:22:54.944611 kernel: VFS: Disk quotas dquot_6.6.0 May 27 03:22:54.944625 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 03:22:54.944639 kernel: pnp: PnP ACPI init May 27 03:22:54.944652 kernel: pnp: PnP ACPI: found 5 devices May 27 03:22:54.944667 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 03:22:54.944680 kernel: NET: Registered PF_INET protocol family May 27 03:22:54.944697 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 03:22:54.944711 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, 
linear) May 27 03:22:54.944725 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 03:22:54.944739 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 27 03:22:54.944752 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 27 03:22:54.944766 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 27 03:22:54.944780 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 03:22:54.944794 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 03:22:54.944807 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 03:22:54.944824 kernel: NET: Registered PF_XDP protocol family May 27 03:22:54.944942 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 27 03:22:54.945059 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 27 03:22:54.945171 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 27 03:22:54.945281 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 27 03:22:54.946452 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 27 03:22:54.946606 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 27 03:22:54.946628 kernel: PCI: CLS 0 bytes, default 64 May 27 03:22:54.946649 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 27 03:22:54.946666 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2409413c780, max_idle_ns: 440795222072 ns May 27 03:22:54.946682 kernel: clocksource: Switched to clocksource tsc May 27 03:22:54.946697 kernel: Initialise system trusted keyrings May 27 03:22:54.946713 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 27 03:22:54.946729 kernel: Key type asymmetric registered May 27 03:22:54.946744 kernel: Asymmetric key parser 'x509' registered May 27 03:22:54.946759 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 27 03:22:54.946775 kernel: io scheduler mq-deadline registered May 27 03:22:54.946793 kernel: io scheduler kyber registered May 27 03:22:54.946808 kernel: io scheduler bfq registered May 27 03:22:54.946824 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 03:22:54.946840 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 03:22:54.946856 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 03:22:54.946871 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 03:22:54.946887 kernel: i8042: Warning: Keylock active May 27 03:22:54.946902 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 03:22:54.946918 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 03:22:54.947063 kernel: rtc_cmos 00:00: RTC can wake from S4 May 27 03:22:54.947190 kernel: rtc_cmos 00:00: registered as rtc0 May 27 03:22:54.947314 kernel: rtc_cmos 00:00: setting system clock to 2025-05-27T03:22:54 UTC (1748316174) May 27 03:22:54.948501 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 27 03:22:54.948552 kernel: intel_pstate: CPU model not supported May 27 03:22:54.948574 kernel: efifb: probing for efifb May 27 03:22:54.948592 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 27 03:22:54.948613 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 27 03:22:54.948630 kernel: efifb: scrolling: redraw May 27 03:22:54.948647 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 27 03:22:54.948664 kernel: Console: switching to colour frame buffer device 100x37 May 27 03:22:54.948681 kernel: fb0: EFI VGA frame buffer device May 27 03:22:54.948699 kernel: pstore: Using crash dump compression: deflate May 27 03:22:54.948717 kernel: pstore: Registered efi_pstore as persistent store backend May 27 03:22:54.948733 kernel: NET: Registered PF_INET6 protocol family May 27 03:22:54.948749 
kernel: Segment Routing with IPv6 May 27 03:22:54.948764 kernel: In-situ OAM (IOAM) with IPv6 May 27 03:22:54.948785 kernel: NET: Registered PF_PACKET protocol family May 27 03:22:54.948801 kernel: Key type dns_resolver registered May 27 03:22:54.948817 kernel: IPI shorthand broadcast: enabled May 27 03:22:54.948834 kernel: sched_clock: Marking stable (2635003010, 156785616)->(2885830281, -94041655) May 27 03:22:54.948849 kernel: registered taskstats version 1 May 27 03:22:54.948864 kernel: Loading compiled-in X.509 certificates May 27 03:22:54.948879 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d' May 27 03:22:54.948895 kernel: Demotion targets for Node 0: null May 27 03:22:54.948910 kernel: Key type .fscrypt registered May 27 03:22:54.948929 kernel: Key type fscrypt-provisioning registered May 27 03:22:54.948946 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 03:22:54.948962 kernel: ima: Allocated hash algorithm: sha1 May 27 03:22:54.948977 kernel: ima: No architecture policies found May 27 03:22:54.948992 kernel: clk: Disabling unused clocks May 27 03:22:54.949011 kernel: Warning: unable to open an initial console. May 27 03:22:54.949027 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 03:22:54.949043 kernel: Write protecting the kernel read-only data: 24576k May 27 03:22:54.949061 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 03:22:54.949081 kernel: Run /init as init process May 27 03:22:54.949098 kernel: with arguments: May 27 03:22:54.949115 kernel: /init May 27 03:22:54.949131 kernel: with environment: May 27 03:22:54.949148 kernel: HOME=/ May 27 03:22:54.949169 kernel: TERM=linux May 27 03:22:54.949186 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 03:22:54.949206 systemd[1]: Successfully made /usr/ read-only. 
May 27 03:22:54.949229 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:22:54.949248 systemd[1]: Detected virtualization amazon. May 27 03:22:54.949266 systemd[1]: Detected architecture x86-64. May 27 03:22:54.949283 systemd[1]: Running in initrd. May 27 03:22:54.949304 systemd[1]: No hostname configured, using default hostname. May 27 03:22:54.950431 systemd[1]: Hostname set to <localhost>. May 27 03:22:54.950456 systemd[1]: Initializing machine ID from VM UUID. May 27 03:22:54.950474 systemd[1]: Queued start job for default target initrd.target. May 27 03:22:54.950492 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:22:54.950510 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:22:54.950529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 03:22:54.950547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:22:54.950570 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 03:22:54.950588 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 03:22:54.950608 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 03:22:54.950627 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 03:22:54.950644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 27 03:22:54.950662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:22:54.950679 systemd[1]: Reached target paths.target - Path Units. May 27 03:22:54.950700 systemd[1]: Reached target slices.target - Slice Units. May 27 03:22:54.950718 systemd[1]: Reached target swap.target - Swaps. May 27 03:22:54.950735 systemd[1]: Reached target timers.target - Timer Units. May 27 03:22:54.950753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:22:54.950770 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:22:54.950788 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 03:22:54.950805 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 03:22:54.950823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:22:54.950844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:22:54.950861 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:22:54.950879 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:22:54.950896 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:22:54.950914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:22:54.950931 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:22:54.950950 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:22:54.950967 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:22:54.950985 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:22:54.951005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 27 03:22:54.951023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:54.951040 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:22:54.951059 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:22:54.951080 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:22:54.951098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:22:54.951149 systemd-journald[207]: Collecting audit messages is disabled. May 27 03:22:54.951188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:54.951209 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:22:54.951228 systemd-journald[207]: Journal started May 27 03:22:54.951263 systemd-journald[207]: Runtime Journal (/run/log/journal/ec232199940d029d98a5560092533d6c) is 4.8M, max 38.4M, 33.6M free. May 27 03:22:54.931971 systemd-modules-load[208]: Inserted module 'overlay' May 27 03:22:54.956461 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:22:54.956946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:22:54.963464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:22:54.967463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:22:54.989366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:22:54.990758 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
May 27 03:22:54.998362 kernel: Bridge firewalling registered May 27 03:22:54.997982 systemd-modules-load[208]: Inserted module 'br_netfilter' May 27 03:22:54.999886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:22:55.005118 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 May 27 03:22:55.005227 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:22:55.006776 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:22:55.009396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:22:55.011777 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 03:22:55.015501 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:22:55.029501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:22:55.032623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:22:55.039904 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:22:55.092478 systemd-resolved[253]: Positive Trust Anchors: May 27 03:22:55.093525 systemd-resolved[253]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:22:55.093595 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:22:55.100802 systemd-resolved[253]: Defaulting to hostname 'linux'. May 27 03:22:55.103709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:22:55.104465 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:22:55.138365 kernel: SCSI subsystem initialized May 27 03:22:55.148348 kernel: Loading iSCSI transport class v2.0-870. May 27 03:22:55.159350 kernel: iscsi: registered transport (tcp) May 27 03:22:55.181897 kernel: iscsi: registered transport (qla4xxx) May 27 03:22:55.181981 kernel: QLogic iSCSI HBA Driver May 27 03:22:55.200059 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:22:55.215509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:22:55.218686 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:22:55.260792 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:22:55.262975 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 27 03:22:55.320353 kernel: raid6: avx512x4 gen() 16447 MB/s May 27 03:22:55.338350 kernel: raid6: avx512x2 gen() 16699 MB/s May 27 03:22:55.356345 kernel: raid6: avx512x1 gen() 17083 MB/s May 27 03:22:55.374344 kernel: raid6: avx2x4 gen() 16942 MB/s May 27 03:22:55.392344 kernel: raid6: avx2x2 gen() 16773 MB/s May 27 03:22:55.410645 kernel: raid6: avx2x1 gen() 12533 MB/s May 27 03:22:55.410690 kernel: raid6: using algorithm avx512x1 gen() 17083 MB/s May 27 03:22:55.429554 kernel: raid6: .... xor() 21683 MB/s, rmw enabled May 27 03:22:55.429600 kernel: raid6: using avx512x2 recovery algorithm May 27 03:22:55.450349 kernel: xor: automatically using best checksumming function avx May 27 03:22:55.618357 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:22:55.624562 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:22:55.626678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:22:55.659244 systemd-udevd[456]: Using default interface naming scheme 'v255'. May 27 03:22:55.665808 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:22:55.669636 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:22:55.696084 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation May 27 03:22:55.722313 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:22:55.724428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:22:55.779283 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:22:55.782049 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 27 03:22:55.836456 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 27 03:22:55.836680 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 27 03:22:55.847365 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 27 03:22:55.856341 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 27 03:22:55.860350 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:66:50:c6:75:a9 May 27 03:22:55.867392 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:22:55.876032 kernel: nvme nvme0: pci function 0000:00:04.0 May 27 03:22:55.876232 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 27 03:22:55.877905 (udev-worker)[501]: Network interface NamePolicy= disabled on kernel command line. May 27 03:22:55.881160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:22:55.881747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:55.882764 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:55.885515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:55.888380 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 27 03:22:55.893011 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:22:55.897931 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:22:55.897971 kernel: GPT:9289727 != 16777215 May 27 03:22:55.897985 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:22:55.897813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:22:55.903605 kernel: GPT:9289727 != 16777215 May 27 03:22:55.903629 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 27 03:22:55.903648 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 03:22:55.897911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:55.904287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:55.910348 kernel: AES CTR mode by8 optimization enabled May 27 03:22:55.941371 kernel: nvme nvme0: using unchecked data buffer May 27 03:22:55.949095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:56.035116 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 27 03:22:56.051693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 27 03:22:56.061126 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:22:56.079649 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 27 03:22:56.088308 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 27 03:22:56.088862 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 27 03:22:56.090069 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:22:56.090997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:22:56.091973 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:22:56.093447 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:22:56.096443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:22:56.108197 disk-uuid[691]: Primary Header is updated. May 27 03:22:56.108197 disk-uuid[691]: Secondary Entries is updated. May 27 03:22:56.108197 disk-uuid[691]: Secondary Header is updated. 
May 27 03:22:56.110935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:22:56.118351 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 03:22:56.125345 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 03:22:57.132873 disk-uuid[697]: The operation has completed successfully. May 27 03:22:57.134244 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 03:22:57.250310 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:22:57.250425 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:22:57.280576 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:22:57.293847 sh[957]: Success May 27 03:22:57.313452 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:22:57.313520 kernel: device-mapper: uevent: version 1.0.3 May 27 03:22:57.314620 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:22:57.326358 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 27 03:22:57.417661 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:22:57.420353 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:22:57.431916 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 27 03:22:57.457888 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:22:57.457944 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (981) May 27 03:22:57.463465 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:22:57.463516 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:57.465888 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:22:57.542775 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:22:57.543804 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:22:57.544496 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:22:57.545907 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:22:57.547265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 03:22:57.591358 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1016) May 27 03:22:57.599342 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:57.599425 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:57.599448 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 03:22:57.612345 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:57.614411 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:22:57.616486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 27 03:22:57.654545 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:22:57.656986 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:22:57.694767 systemd-networkd[1150]: lo: Link UP May 27 03:22:57.694787 systemd-networkd[1150]: lo: Gained carrier May 27 03:22:57.696863 systemd-networkd[1150]: Enumeration completed May 27 03:22:57.697431 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:22:57.697542 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:22:57.697547 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:22:57.699784 systemd[1]: Reached target network.target - Network. May 27 03:22:57.703896 systemd-networkd[1150]: eth0: Link UP May 27 03:22:57.703902 systemd-networkd[1150]: eth0: Gained carrier May 27 03:22:57.703917 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:22:57.720907 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.28.64/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 27 03:22:58.041991 ignition[1101]: Ignition 2.21.0 May 27 03:22:58.042006 ignition[1101]: Stage: fetch-offline May 27 03:22:58.042177 ignition[1101]: no configs at "/usr/lib/ignition/base.d" May 27 03:22:58.042186 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:22:58.042618 ignition[1101]: Ignition finished successfully May 27 03:22:58.043773 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:22:58.045590 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 27 03:22:58.072604 ignition[1160]: Ignition 2.21.0 May 27 03:22:58.072627 ignition[1160]: Stage: fetch May 27 03:22:58.072975 ignition[1160]: no configs at "/usr/lib/ignition/base.d" May 27 03:22:58.072984 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:22:58.073089 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:22:58.101370 ignition[1160]: PUT result: OK May 27 03:22:58.104162 ignition[1160]: parsed url from cmdline: "" May 27 03:22:58.104171 ignition[1160]: no config URL provided May 27 03:22:58.104178 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:22:58.104190 ignition[1160]: no config at "/usr/lib/ignition/user.ign" May 27 03:22:58.104206 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:22:58.105202 ignition[1160]: PUT result: OK May 27 03:22:58.105254 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 27 03:22:58.108406 ignition[1160]: GET result: OK May 27 03:22:58.108556 ignition[1160]: parsing config with SHA512: 2b35eb5683a632505a76a25d3265e4154fe16a47739e8ccaf9eeba753f31da18c7be48ef56cd23655e32b5998eafb7eac3838a315eebc6741f50537e4cddeb19 May 27 03:22:58.116480 unknown[1160]: fetched base config from "system" May 27 03:22:58.116490 unknown[1160]: fetched base config from "system" May 27 03:22:58.116830 ignition[1160]: fetch: fetch complete May 27 03:22:58.116495 unknown[1160]: fetched user config from "aws" May 27 03:22:58.116836 ignition[1160]: fetch: fetch passed May 27 03:22:58.116874 ignition[1160]: Ignition finished successfully May 27 03:22:58.119053 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 03:22:58.120581 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 27 03:22:58.145867 ignition[1166]: Ignition 2.21.0 May 27 03:22:58.145881 ignition[1166]: Stage: kargs May 27 03:22:58.146175 ignition[1166]: no configs at "/usr/lib/ignition/base.d" May 27 03:22:58.146183 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:22:58.146263 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:22:58.147217 ignition[1166]: PUT result: OK May 27 03:22:58.149569 ignition[1166]: kargs: kargs passed May 27 03:22:58.149636 ignition[1166]: Ignition finished successfully May 27 03:22:58.150989 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:22:58.152528 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 03:22:58.176004 ignition[1172]: Ignition 2.21.0 May 27 03:22:58.176018 ignition[1172]: Stage: disks May 27 03:22:58.176308 ignition[1172]: no configs at "/usr/lib/ignition/base.d" May 27 03:22:58.176317 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:22:58.176429 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:22:58.177271 ignition[1172]: PUT result: OK May 27 03:22:58.180222 ignition[1172]: disks: disks passed May 27 03:22:58.180274 ignition[1172]: Ignition finished successfully May 27 03:22:58.182472 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:22:58.183066 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:22:58.183494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:22:58.184036 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:22:58.184604 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:22:58.185160 systemd[1]: Reached target basic.target - Basic System. May 27 03:22:58.186868 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 27 03:22:58.231477 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:22:58.234506 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:22:58.236153 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:22:58.385343 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:22:58.385893 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:22:58.386742 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:22:58.388586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:22:58.389962 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:22:58.391715 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:22:58.391763 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:22:58.391786 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:22:58.403630 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:22:58.405253 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:22:58.424353 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1200) May 27 03:22:58.428445 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:58.428509 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:58.428523 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 03:22:58.437056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 03:22:58.797232 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:22:58.825382 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory May 27 03:22:58.830614 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:22:58.835041 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:22:59.070980 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:22:59.072679 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 03:22:59.073805 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 03:22:59.093383 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:22:59.102812 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:59.124367 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 03:22:59.128683 ignition[1313]: INFO : Ignition 2.21.0 May 27 03:22:59.128683 ignition[1313]: INFO : Stage: mount May 27 03:22:59.129942 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:22:59.129942 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:22:59.129942 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:22:59.131186 ignition[1313]: INFO : PUT result: OK May 27 03:22:59.133016 ignition[1313]: INFO : mount: mount passed May 27 03:22:59.133016 ignition[1313]: INFO : Ignition finished successfully May 27 03:22:59.134574 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:22:59.136250 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:22:59.164948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 27 03:22:59.205354 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1326) May 27 03:22:59.209320 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:59.209397 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:59.209411 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 03:22:59.218060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:22:59.247342 ignition[1342]: INFO : Ignition 2.21.0 May 27 03:22:59.247342 ignition[1342]: INFO : Stage: files May 27 03:22:59.248840 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:22:59.248840 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 03:22:59.248840 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 03:22:59.250425 ignition[1342]: INFO : PUT result: OK May 27 03:22:59.252981 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping May 27 03:22:59.254555 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:22:59.254555 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:22:59.259311 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:22:59.260169 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:22:59.260751 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:22:59.260210 unknown[1342]: wrote ssh authorized keys file for user: core May 27 03:22:59.263097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:22:59.263726 ignition[1342]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 27 03:22:59.358769 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:22:59.540073 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:22:59.540928 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:22:59.546791 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:22:59.547562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:22:59.547562 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:22:59.549370 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:22:59.549370 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:22:59.551351 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 27 03:22:59.687495 systemd-networkd[1150]: eth0: Gained IPv6LL
May 27 03:23:00.283862 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:23:00.745965 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:23:00.745965 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:23:00.749580 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:23:00.754193 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:23:00.754193 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:23:00.754193 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:23:00.758080 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:23:00.758080 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:23:00.758080 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:23:00.758080 ignition[1342]: INFO : files: files passed
May 27 03:23:00.758080 ignition[1342]: INFO : Ignition finished successfully
May 27 03:23:00.756647 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:23:00.759364 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:23:00.763447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:23:00.771793 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:23:00.771895 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:23:00.786217 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:23:00.786217 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:23:00.790074 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:23:00.791209 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:23:00.791918 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:23:00.794136 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:23:00.851753 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:23:00.851877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:23:00.853269 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:23:00.854093 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:23:00.854898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:23:00.855760 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:23:00.881896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:23:00.883880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:23:00.907915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:23:00.908827 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:23:00.910217 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:23:00.911075 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:23:00.911349 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:23:00.912416 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:23:00.913318 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:23:00.914259 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:23:00.915032 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:23:00.915815 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:23:00.916668 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:23:00.917436 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:23:00.918393 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:23:00.919195 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:23:00.920294 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:23:00.921083 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:23:00.921901 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:23:00.922134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:23:00.923213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:23:00.924057 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:23:00.924750 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:23:00.924923 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:23:00.925577 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:23:00.925964 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:23:00.926970 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:23:00.927217 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:23:00.927876 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:23:00.928080 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:23:00.930423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:23:00.935631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:23:00.937051 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:23:00.938016 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:23:00.938812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:23:00.938978 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:23:00.947643 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:23:00.948468 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:23:00.966204 ignition[1396]: INFO : Ignition 2.21.0
May 27 03:23:00.966204 ignition[1396]: INFO : Stage: umount
May 27 03:23:00.966204 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:23:00.966204 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 03:23:00.966204 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 03:23:00.969487 ignition[1396]: INFO : PUT result: OK
May 27 03:23:00.972417 ignition[1396]: INFO : umount: umount passed
May 27 03:23:00.973399 ignition[1396]: INFO : Ignition finished successfully
May 27 03:23:00.975476 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:23:00.975628 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:23:00.977614 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:23:00.977685 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:23:00.978316 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:23:00.978398 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:23:00.979199 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 03:23:00.979272 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 03:23:00.980930 systemd[1]: Stopped target network.target - Network.
May 27 03:23:00.981558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:23:00.981628 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:23:00.982428 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:23:00.983117 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:23:00.983450 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:23:00.984850 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:23:00.985560 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:23:00.986415 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:23:00.986470 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:23:00.988152 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:23:00.988206 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:23:00.988927 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:23:00.989005 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:23:00.989810 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:23:00.989873 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:23:00.991129 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:23:00.992514 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:23:00.995525 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:23:00.999598 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:23:00.999744 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:23:01.002258 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:23:01.002647 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:23:01.002787 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:23:01.006765 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:23:01.007861 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:23:01.008690 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:23:01.008741 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:23:01.010442 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:23:01.011423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:23:01.011503 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:23:01.012093 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:23:01.012160 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:23:01.014532 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:23:01.014598 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:23:01.015583 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:23:01.015645 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:23:01.016237 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:23:01.021087 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:23:01.021198 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:23:01.041120 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:23:01.041952 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:23:01.042732 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:23:01.042821 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:23:01.044186 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:23:01.044254 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:23:01.044831 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:23:01.044873 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:23:01.045932 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:23:01.046005 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:23:01.047073 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:23:01.047141 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:23:01.048318 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:23:01.048400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:23:01.051959 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:23:01.052509 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:23:01.052587 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:23:01.056432 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:23:01.056521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:23:01.057246 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 03:23:01.057310 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:23:01.058175 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:23:01.058247 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:23:01.059820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:23:01.059883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:23:01.063131 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 03:23:01.063218 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 03:23:01.063268 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 03:23:01.063338 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:23:01.071636 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:23:01.072189 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:23:01.144314 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:23:01.144471 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:23:01.146163 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:23:01.146782 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:23:01.146885 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:23:01.148873 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:23:01.169032 systemd[1]: Switching root.
May 27 03:23:01.197203 systemd-journald[207]: Journal stopped
May 27 03:23:02.943824 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
May 27 03:23:02.943909 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:23:02.943932 kernel: SELinux: policy capability open_perms=1
May 27 03:23:02.943956 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:23:02.943974 kernel: SELinux: policy capability always_check_network=0
May 27 03:23:02.943992 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:23:02.944011 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:23:02.944030 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:23:02.944059 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:23:02.944078 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:23:02.944097 kernel: audit: type=1403 audit(1748316181.548:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:23:02.944118 systemd[1]: Successfully loaded SELinux policy in 65.230ms.
May 27 03:23:02.944154 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.425ms.
May 27 03:23:02.944176 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:23:02.944197 systemd[1]: Detected virtualization amazon.
May 27 03:23:02.944216 systemd[1]: Detected architecture x86-64.
May 27 03:23:02.944235 systemd[1]: Detected first boot.
May 27 03:23:02.944263 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:23:02.944286 zram_generator::config[1439]: No configuration found.
May 27 03:23:02.944306 kernel: Guest personality initialized and is inactive
May 27 03:23:02.944374 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:23:02.944393 kernel: Initialized host personality
May 27 03:23:02.944411 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:23:02.944429 systemd[1]: Populated /etc with preset unit settings.
May 27 03:23:02.944449 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:23:02.944473 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:23:02.944492 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:23:02.944510 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:23:02.944529 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:23:02.944547 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:23:02.944567 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:23:02.944586 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:23:02.944605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:23:02.944624 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:23:02.944645 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:23:02.944664 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:23:02.944682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:23:02.944701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:23:02.944718 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:23:02.944737 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:23:02.944756 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:23:02.944778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:23:02.944796 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:23:02.944816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:23:02.944834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:23:02.944853 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:23:02.944872 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:23:02.944891 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:23:02.944909 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:23:02.944927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:23:02.944947 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:23:02.944968 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:23:02.944991 systemd[1]: Reached target swap.target - Swaps.
May 27 03:23:02.945010 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:23:02.945029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:23:02.945048 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:23:02.945066 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:23:02.945084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:23:02.945103 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:23:02.945121 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:23:02.945142 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:23:02.945161 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:23:02.945181 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:23:02.945200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:02.945218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:23:02.945236 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:23:02.945254 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:23:02.945274 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:23:02.945300 systemd[1]: Reached target machines.target - Containers.
May 27 03:23:02.945346 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:23:02.945369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:23:02.945389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:23:02.945408 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:23:02.945428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:23:02.945450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:23:02.945473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:23:02.945494 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:23:02.945520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:23:02.945543 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:23:02.945563 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:23:02.945585 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:23:02.945606 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:23:02.945627 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:23:02.945650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:23:02.945672 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:23:02.945747 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:23:02.945775 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:23:02.945796 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:23:02.945819 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:23:02.945844 kernel: loop: module loaded
May 27 03:23:02.945867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:23:02.945889 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:23:02.945911 systemd[1]: Stopped verity-setup.service.
May 27 03:23:02.945934 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:02.945956 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:23:02.945981 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:23:02.946002 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:23:02.946024 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:23:02.946046 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:23:02.946068 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:23:02.946090 kernel: ACPI: bus type drm_connector registered
May 27 03:23:02.946111 kernel: fuse: init (API version 7.41)
May 27 03:23:02.946131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:23:02.946154 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:23:02.946180 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:23:02.946203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:23:02.946224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:23:02.946246 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:23:02.946268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:23:02.946289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:23:02.946312 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:23:02.946351 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:23:02.946373 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:23:02.946395 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:23:02.946454 systemd-journald[1518]: Collecting audit messages is disabled.
May 27 03:23:02.946493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:23:02.946514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:23:02.946535 systemd-journald[1518]: Journal started
May 27 03:23:02.946579 systemd-journald[1518]: Runtime Journal (/run/log/journal/ec232199940d029d98a5560092533d6c) is 4.8M, max 38.4M, 33.6M free.
May 27 03:23:02.575626 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:23:02.588568 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 27 03:23:02.589072 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:23:02.949995 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:23:02.952622 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:23:02.954433 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:23:02.958962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:23:02.974091 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:23:02.980157 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:23:02.985531 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:23:02.991520 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:23:02.993106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:23:02.993307 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:23:02.996496 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:23:03.001290 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:23:03.003558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:23:03.006932 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:23:03.010598 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:23:03.011533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:23:03.016494 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:23:03.017272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:23:03.023493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:23:03.027744 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:23:03.033548 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:23:03.036734 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:23:03.038576 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:23:03.043376 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:23:03.046056 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:23:03.051483 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:23:03.060469 systemd-journald[1518]: Time spent on flushing to /var/log/journal/ec232199940d029d98a5560092533d6c is 99.868ms for 1022 entries.
May 27 03:23:03.060469 systemd-journald[1518]: System Journal (/var/log/journal/ec232199940d029d98a5560092533d6c) is 8M, max 195.6M, 187.6M free.
May 27 03:23:03.173218 systemd-journald[1518]: Received client request to flush runtime journal.
May 27 03:23:03.173289 kernel: loop0: detected capacity change from 0 to 72352
May 27 03:23:03.121282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:23:03.124825 systemd-tmpfiles[1574]: ACLs are not supported, ignoring.
May 27 03:23:03.124846 systemd-tmpfiles[1574]: ACLs are not supported, ignoring.
May 27 03:23:03.135409 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:23:03.141977 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:23:03.146757 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:23:03.175696 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:23:03.203294 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:23:03.236294 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:23:03.263389 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:23:03.267388 kernel: loop1: detected capacity change from 0 to 113872
May 27 03:23:03.269735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:23:03.310182 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
May 27 03:23:03.310213 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
May 27 03:23:03.321394 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:23:03.379345 kernel: loop2: detected capacity change from 0 to 229808
May 27 03:23:03.501353 kernel: loop3: detected capacity change from 0 to 146240
May 27 03:23:03.598222 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:23:03.626353 kernel: loop4: detected capacity change from 0 to 72352
May 27 03:23:03.640707 kernel: loop5: detected capacity change from 0 to 113872
May 27 03:23:03.670355 kernel: loop6: detected capacity change from 0 to 229808
May 27 03:23:03.708382 kernel: loop7: detected capacity change from 0 to 146240
May 27 03:23:03.734710 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 27 03:23:03.735208 (sd-merge)[1600]: Merged extensions into '/usr'.
May 27 03:23:03.740840 systemd[1]: Reload requested from client PID 1573 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:23:03.740977 systemd[1]: Reloading...
May 27 03:23:03.817346 zram_generator::config[1628]: No configuration found.
May 27 03:23:03.944506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:23:04.035944 systemd[1]: Reloading finished in 294 ms.
May 27 03:23:04.050574 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:23:04.060474 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:23:04.072899 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:23:04.077467 systemd[1]: Starting ensure-sysext.service...
May 27 03:23:04.079777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:23:04.105207 systemd-udevd[1677]: Using default interface naming scheme 'v255'.
May 27 03:23:04.108533 systemd[1]: Reload requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)...
May 27 03:23:04.108553 systemd[1]: Reloading...
May 27 03:23:04.115248 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:23:04.115275 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:23:04.116705 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:23:04.116951 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:23:04.118433 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:23:04.118694 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
May 27 03:23:04.118747 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
May 27 03:23:04.125952 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:23:04.125964 systemd-tmpfiles[1680]: Skipping /boot
May 27 03:23:04.139420 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:23:04.139434 systemd-tmpfiles[1680]: Skipping /boot
May 27 03:23:04.192343 zram_generator::config[1708]: No configuration found.
May 27 03:23:04.375803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:23:04.380480 (udev-worker)[1715]: Network interface NamePolicy= disabled on kernel command line.
May 27 03:23:04.482349 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 27 03:23:04.497353 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 27 03:23:04.502345 kernel: ACPI: button: Power Button [PWRF]
May 27 03:23:04.504358 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
May 27 03:23:04.517367 kernel: ACPI: button: Sleep Button [SLPF]
May 27 03:23:04.516529 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:23:04.516739 systemd[1]: Reloading finished in 407 ms.
May 27 03:23:04.527202 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:23:04.527970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:23:04.538367 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:23:04.540522 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:23:04.544588 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:23:04.548096 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:23:04.553622 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:23:04.560683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:23:04.568566 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:23:04.628263 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:23:04.637272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:04.637574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:23:04.641620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:23:04.644167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:23:04.648973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:23:04.649454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:23:04.649623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:23:04.649775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:04.651168 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:23:04.668744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:04.671078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:23:04.671903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:23:04.672458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:23:04.677211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:23:04.677919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:04.693763 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:23:04.707011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:23:04.708639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:23:04.711056 systemd[1]: Finished ensure-sysext.service.
May 27 03:23:04.712052 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:23:04.713598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:23:04.714549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:23:04.714817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:23:04.730258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:04.731595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:23:04.733855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:23:04.735519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:23:04.735567 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:23:04.735616 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:23:04.735662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:23:04.735699 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:23:04.737139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:23:04.744668 ldconfig[1568]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:23:04.748757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:23:04.749946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:23:04.753521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:23:04.758464 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:23:04.763871 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:23:04.766678 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:23:04.766870 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:23:04.785495 augenrules[1867]: No rules
May 27 03:23:04.786942 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:23:04.787153 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:23:04.797420 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:23:04.827474 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:23:04.828077 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:23:04.894041 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:23:04.932071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:23:05.014940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 27 03:23:05.019464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:23:05.026913 systemd-networkd[1806]: lo: Link UP
May 27 03:23:05.026926 systemd-networkd[1806]: lo: Gained carrier
May 27 03:23:05.028244 systemd-networkd[1806]: Enumeration completed
May 27 03:23:05.028371 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:23:05.030427 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:23:05.034011 systemd-networkd[1806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:23:05.034020 systemd-networkd[1806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:23:05.034785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:23:05.041607 systemd-networkd[1806]: eth0: Link UP
May 27 03:23:05.041773 systemd-networkd[1806]: eth0: Gained carrier
May 27 03:23:05.041802 systemd-networkd[1806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:23:05.051717 systemd-resolved[1807]: Positive Trust Anchors:
May 27 03:23:05.051736 systemd-resolved[1807]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:23:05.051778 systemd-resolved[1807]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:23:05.055396 systemd-networkd[1806]: eth0: DHCPv4 address 172.31.28.64/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 27 03:23:05.058481 systemd-resolved[1807]: Defaulting to hostname 'linux'.
May 27 03:23:05.060804 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:23:05.061281 systemd[1]: Reached target network.target - Network.
May 27 03:23:05.061817 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:23:05.062166 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:23:05.062580 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:23:05.062918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:23:05.063227 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:23:05.063700 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:23:05.064084 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:23:05.064398 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:23:05.064682 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:23:05.064706 systemd[1]: Reached target paths.target - Path Units.
May 27 03:23:05.064982 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:23:05.066719 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:23:05.068470 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:23:05.071599 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:23:05.072115 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:23:05.072478 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:23:05.074908 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:23:05.076316 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:23:05.077671 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:23:05.078487 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:23:05.078962 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:23:05.081598 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:23:05.081967 systemd[1]: Reached target basic.target - Basic System.
May 27 03:23:05.082363 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:23:05.082394 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:23:05.083460 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:23:05.087451 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 03:23:05.095544 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:23:05.097570 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:23:05.101405 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:23:05.103541 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:23:05.104381 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:23:05.106105 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:23:05.112499 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:23:05.116524 systemd[1]: Started ntpd.service - Network Time Service.
May 27 03:23:05.120438 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:23:05.129207 systemd[1]: Starting setup-oem.service - Setup OEM...
May 27 03:23:05.134531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:23:05.139505 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:23:05.142129 jq[1967]: false
May 27 03:23:05.158110 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:23:05.162359 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:23:05.162883 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:23:05.166315 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:23:05.169606 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:23:05.174717 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:23:05.175646 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:23:05.176003 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:23:05.176956 extend-filesystems[1968]: Found loop4
May 27 03:23:05.177539 extend-filesystems[1968]: Found loop5
May 27 03:23:05.177539 extend-filesystems[1968]: Found loop6
May 27 03:23:05.177539 extend-filesystems[1968]: Found loop7
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1p1
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1p2
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1p3
May 27 03:23:05.177539 extend-filesystems[1968]: Found usr
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1p4
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1p6
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1p7
May 27 03:23:05.177539 extend-filesystems[1968]: Found nvme0n1p9
May 27 03:23:05.177539 extend-filesystems[1968]: Checking size of /dev/nvme0n1p9
May 27 03:23:05.182642 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Refreshing passwd entry cache
May 27 03:23:05.177998 oslogin_cache_refresh[1969]: Refreshing passwd entry cache
May 27 03:23:05.199234 jq[1983]: true
May 27 03:23:05.206231 update_engine[1979]: I20250527 03:23:05.206161 1979 main.cc:92] Flatcar Update Engine starting
May 27 03:23:05.217985 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:23:05.218197 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:23:05.223774 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Failure getting users, quitting
May 27 03:23:05.223774 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:23:05.223774 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Refreshing group entry cache
May 27 03:23:05.223135 oslogin_cache_refresh[1969]: Failure getting users, quitting
May 27 03:23:05.223155 oslogin_cache_refresh[1969]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:23:05.223203 oslogin_cache_refresh[1969]: Refreshing group entry cache
May 27 03:23:05.224951 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Failure getting groups, quitting
May 27 03:23:05.224951 google_oslogin_nss_cache[1969]: oslogin_cache_refresh[1969]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:23:05.224892 oslogin_cache_refresh[1969]: Failure getting groups, quitting
May 27 03:23:05.224904 oslogin_cache_refresh[1969]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:23:05.227973 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:23:05.237531 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:23:05.238858 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:23:05.239050 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:23:05.243641 extend-filesystems[1968]: Resized partition /dev/nvme0n1p9
May 27 03:23:05.255367 jq[1999]: true
May 27 03:23:05.264100 (ntainerd)[1990]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:23:05.278652 ntpd[1971]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:37:40 UTC 2025 (1): Starting
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:37:40 UTC 2025 (1): Starting
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: ----------------------------------------------------
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: ntp-4 is maintained by Network Time Foundation,
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: corporation. Support and training for ntp-4 are
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: available at https://www.nwtime.org/support
May 27 03:23:05.285518 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: ----------------------------------------------------
May 27 03:23:05.278677 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 03:23:05.286727 extend-filesystems[2020]: resize2fs 1.47.2 (1-Jan-2025)
May 27 03:23:05.278685 ntpd[1971]: ----------------------------------------------------
May 27 03:23:05.278692 ntpd[1971]: ntp-4 is maintained by Network Time Foundation,
May 27 03:23:05.294642 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 27 03:23:05.278698 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 03:23:05.278704 ntpd[1971]: corporation. Support and training for ntp-4 are
May 27 03:23:05.278712 ntpd[1971]: available at https://www.nwtime.org/support
May 27 03:23:05.278719 ntpd[1971]: ----------------------------------------------------
May 27 03:23:05.295243 dbus-daemon[1965]: [system] SELinux support is enabled
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: proto: precision = 0.056 usec (-24)
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: basedate set to 2025-05-15
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: gps base set to 2025-05-18 (week 2367)
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Listen normally on 3 eth0 172.31.28.64:123
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Listen normally on 4 lo [::1]:123
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: bind(21) AF_INET6 fe80::466:50ff:fec6:75a9%2#123 flags 0x11 failed: Cannot assign requested address
May 27 03:23:05.306669 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: unable to create socket on eth0 (5) for fe80::466:50ff:fec6:75a9%2#123
May 27 03:23:05.295377 systemd[1]: Finished setup-oem.service - Setup OEM.
May 27 03:23:05.295955 ntpd[1971]: proto: precision = 0.056 usec (-24)
May 27 03:23:05.296529 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:23:05.303462 ntpd[1971]: basedate set to 2025-05-15
May 27 03:23:05.302887 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:23:05.303481 ntpd[1971]: gps base set to 2025-05-18 (week 2367)
May 27 03:23:05.302913 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:23:05.305000 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123
May 27 03:23:05.305453 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:23:05.305034 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 03:23:05.305476 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:23:05.305179 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123
May 27 03:23:05.305207 ntpd[1971]: Listen normally on 3 eth0 172.31.28.64:123
May 27 03:23:05.305240 ntpd[1971]: Listen normally on 4 lo [::1]:123
May 27 03:23:05.305290 ntpd[1971]: bind(21) AF_INET6 fe80::466:50ff:fec6:75a9%2#123 flags 0x11 failed: Cannot assign requested address
May 27 03:23:05.305306 ntpd[1971]: unable to create socket on eth0 (5) for fe80::466:50ff:fec6:75a9%2#123
May 27 03:23:05.305318 ntpd[1971]: failed to init interface for address fe80::466:50ff:fec6:75a9%2
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.307 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.309 INFO Fetch successful
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.310 INFO Fetch successful
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.311 INFO Fetch successful
May 27 03:23:05.312009 coreos-metadata[1964]: May 27 03:23:05.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 27 03:23:05.389846 tar[1985]: linux-amd64/LICENSE
May 27 03:23:05.389846 tar[1985]: linux-amd64/helm
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.314 INFO Fetch successful
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.319 INFO Fetch failed with 404: resource not found
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.325 INFO Fetch successful
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.326 INFO Fetch successful
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.326 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.329 INFO Fetch successful
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.329 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.334 INFO Fetch successful
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.334 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 27 03:23:05.390203 coreos-metadata[1964]: May 27 03:23:05.335 INFO Fetch successful
May 27 03:23:05.390517 update_engine[1979]: I20250527 03:23:05.318861 1979 update_check_scheduler.cc:74] Next update check in 7m53s
May 27 03:23:05.313421 ntpd[1971]: Listening on routing socket on fd #21 for interface updates
May 27 03:23:05.318016 systemd[1]: Started update-engine.service - Update Engine.
May 27 03:23:05.390636 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: failed to init interface for address fe80::466:50ff:fec6:75a9%2
May 27 03:23:05.390636 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: Listening on routing socket on fd #21 for interface updates
May 27 03:23:05.390636 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:23:05.390636 ntpd[1971]: 27 May 03:23:05 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:23:05.320642 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1806 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 27 03:23:05.321923 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 03:23:05.322700 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:23:05.390807 systemd-logind[1977]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 03:23:05.413430 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 27 03:23:05.322727 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 03:23:05.390824 systemd-logind[1977]: Watching system buttons on /dev/input/event3 (Sleep Button)
May 27 03:23:05.390841 systemd-logind[1977]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 03:23:05.391113 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 27 03:23:05.392596 systemd-logind[1977]: New seat seat0.
May 27 03:23:05.403754 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 03:23:05.454187 extend-filesystems[2020]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 27 03:23:05.454187 extend-filesystems[2020]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 03:23:05.454187 extend-filesystems[2020]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 27 03:23:05.462676 extend-filesystems[1968]: Resized filesystem in /dev/nvme0n1p9
May 27 03:23:05.455521 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 03:23:05.473503 bash[2043]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:23:05.455747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 03:23:05.466647 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 03:23:05.482424 systemd[1]: Starting sshkeys.service...
May 27 03:23:05.495815 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 03:23:05.499251 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 03:23:05.540924 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 03:23:05.546490 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 03:23:05.707961 coreos-metadata[2112]: May 27 03:23:05.707 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 27 03:23:05.718120 coreos-metadata[2112]: May 27 03:23:05.715 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 27 03:23:05.718120 coreos-metadata[2112]: May 27 03:23:05.717 INFO Fetch successful May 27 03:23:05.718120 coreos-metadata[2112]: May 27 03:23:05.718 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 27 03:23:05.726934 coreos-metadata[2112]: May 27 03:23:05.723 INFO Fetch successful May 27 03:23:05.730175 unknown[2112]: wrote ssh authorized keys file for user: core May 27 03:23:05.740861 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 27 03:23:05.746014 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.hostname1' May 27 03:23:05.748493 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2077 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 27 03:23:05.761999 systemd[1]: Starting polkit.service - Authorization Manager... May 27 03:23:05.815700 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys" May 27 03:23:05.821945 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 03:23:05.833133 systemd[1]: Finished sshkeys.service. 
May 27 03:23:05.903921 locksmithd[2032]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:23:05.981317 containerd[1990]: time="2025-05-27T03:23:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:23:05.981317 containerd[1990]: time="2025-05-27T03:23:05.980305708Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:23:06.003876 polkitd[2158]: Started polkitd version 126 May 27 03:23:06.015124 polkitd[2158]: Loading rules from directory /etc/polkit-1/rules.d May 27 03:23:06.017366 polkitd[2158]: Loading rules from directory /run/polkit-1/rules.d May 27 03:23:06.018184 polkitd[2158]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 03:23:06.018786 polkitd[2158]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 27 03:23:06.020021 polkitd[2158]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 03:23:06.020083 polkitd[2158]: Loading rules from directory /usr/share/polkit-1/rules.d May 27 03:23:06.022849 polkitd[2158]: Finished loading, compiling and executing 2 rules May 27 03:23:06.023361 systemd[1]: Started polkit.service - Authorization Manager. 
May 27 03:23:06.027562 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 27 03:23:06.029132 polkitd[2158]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 27 03:23:06.036341 containerd[1990]: time="2025-05-27T03:23:06.036054930Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.758µs"
May 27 03:23:06.036341 containerd[1990]: time="2025-05-27T03:23:06.036097662Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:23:06.036341 containerd[1990]: time="2025-05-27T03:23:06.036121094Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037215898Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037263158Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037300062Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037397178Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037417961Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037722538Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037739811Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037755458Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037767157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.037854345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:23:06.038627 containerd[1990]: time="2025-05-27T03:23:06.038081909Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:23:06.039049 containerd[1990]: time="2025-05-27T03:23:06.038113734Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:23:06.039049 containerd[1990]: time="2025-05-27T03:23:06.038127856Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:23:06.040398 containerd[1990]: time="2025-05-27T03:23:06.039975463Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:23:06.041809 containerd[1990]: time="2025-05-27T03:23:06.041784301Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:23:06.042126 containerd[1990]: time="2025-05-27T03:23:06.041958327Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050216518Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050294391Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050313964Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050343085Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050363531Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050379860Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050399071Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050416688Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050442144Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050456715Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050475359Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050498065Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050643037Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:23:06.051498 containerd[1990]: time="2025-05-27T03:23:06.050666384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050686112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050702213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050718079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050735633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050752579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050766724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050783108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050798278Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050814770Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050893973Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050912808Z" level=info msg="Start snapshots syncer"
May 27 03:23:06.052094 containerd[1990]: time="2025-05-27T03:23:06.050948768Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:23:06.059182 containerd[1990]: time="2025-05-27T03:23:06.058776394Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:23:06.059182 containerd[1990]: time="2025-05-27T03:23:06.058892593Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059028083Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059197012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059236833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059258095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059275555Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059302512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059413765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 03:23:06.059479 containerd[1990]: time="2025-05-27T03:23:06.059438081Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 03:23:06.060759 systemd-resolved[1807]: System hostname changed to 'ip-172-31-28-64'.
May 27 03:23:06.061177 systemd-hostnamed[2077]: Hostname set to (transient)
May 27 03:23:06.061544 containerd[1990]: time="2025-05-27T03:23:06.061417425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 03:23:06.061544 containerd[1990]: time="2025-05-27T03:23:06.061465277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 03:23:06.061544 containerd[1990]: time="2025-05-27T03:23:06.061491909Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 03:23:06.061673 containerd[1990]: time="2025-05-27T03:23:06.061553519Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:23:06.061673 containerd[1990]: time="2025-05-27T03:23:06.061584931Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:23:06.061673 containerd[1990]: time="2025-05-27T03:23:06.061604318Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:23:06.061673 containerd[1990]: time="2025-05-27T03:23:06.061621518Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:23:06.061673 containerd[1990]: time="2025-05-27T03:23:06.061639759Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 03:23:06.061673 containerd[1990]: time="2025-05-27T03:23:06.061660926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 03:23:06.061880 containerd[1990]: time="2025-05-27T03:23:06.061692790Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 03:23:06.061880 containerd[1990]: time="2025-05-27T03:23:06.061720624Z" level=info msg="runtime interface created"
May 27 03:23:06.061880 containerd[1990]: time="2025-05-27T03:23:06.061729223Z" level=info msg="created NRI interface"
May 27 03:23:06.061880 containerd[1990]: time="2025-05-27T03:23:06.061749208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 03:23:06.061880 containerd[1990]: time="2025-05-27T03:23:06.061770841Z" level=info msg="Connect containerd service"
May 27 03:23:06.061880 containerd[1990]: time="2025-05-27T03:23:06.061825571Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 03:23:06.063242 containerd[1990]: time="2025-05-27T03:23:06.062905778Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 03:23:06.155248 sshd_keygen[2018]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 03:23:06.212377 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 03:23:06.219702 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 03:23:06.248309 systemd[1]: issuegen.service: Deactivated successfully.
May 27 03:23:06.248672 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 03:23:06.252800 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 03:23:06.282410 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 03:23:06.284235 ntpd[1971]: bind(24) AF_INET6 fe80::466:50ff:fec6:75a9%2#123 flags 0x11 failed: Cannot assign requested address
May 27 03:23:06.284606 ntpd[1971]: 27 May 03:23:06 ntpd[1971]: bind(24) AF_INET6 fe80::466:50ff:fec6:75a9%2#123 flags 0x11 failed: Cannot assign requested address
May 27 03:23:06.284672 ntpd[1971]: unable to create socket on eth0 (6) for fe80::466:50ff:fec6:75a9%2#123
May 27 03:23:06.285649 ntpd[1971]: 27 May 03:23:06 ntpd[1971]: unable to create socket on eth0 (6) for fe80::466:50ff:fec6:75a9%2#123
May 27 03:23:06.285649 ntpd[1971]: 27 May 03:23:06 ntpd[1971]: failed to init interface for address fe80::466:50ff:fec6:75a9%2
May 27 03:23:06.285505 ntpd[1971]: failed to init interface for address fe80::466:50ff:fec6:75a9%2
May 27 03:23:06.287669 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 03:23:06.291520 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 03:23:06.292560 systemd[1]: Reached target getty.target - Login Prompts.
May 27 03:23:06.344242 systemd-networkd[1806]: eth0: Gained IPv6LL
May 27 03:23:06.359463 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 03:23:06.360968 systemd[1]: Reached target network-online.target - Network is Online.
May 27 03:23:06.366605 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 27 03:23:06.369185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:06.372037 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 03:23:06.436907 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 03:23:06.493344 containerd[1990]: time="2025-05-27T03:23:06.492999340Z" level=info msg="Start subscribing containerd event"
May 27 03:23:06.494507 containerd[1990]: time="2025-05-27T03:23:06.494453904Z" level=info msg="Start recovering state"
May 27 03:23:06.494666 containerd[1990]: time="2025-05-27T03:23:06.494642562Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494720626Z" level=info msg="Start event monitor"
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494785254Z" level=info msg="Start cni network conf syncer for default"
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494802491Z" level=info msg="Start streaming server"
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494823063Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494848619Z" level=info msg="runtime interface starting up..."
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494856973Z" level=info msg="starting plugins..."
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494875192Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 03:23:06.496472 containerd[1990]: time="2025-05-27T03:23:06.494972431Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 03:23:06.496760 containerd[1990]: time="2025-05-27T03:23:06.496594731Z" level=info msg="containerd successfully booted in 0.520329s"
May 27 03:23:06.496747 systemd[1]: Started containerd.service - containerd container runtime.
May 27 03:23:06.515573 amazon-ssm-agent[2196]: Initializing new seelog logger
May 27 03:23:06.516133 amazon-ssm-agent[2196]: New Seelog Logger Creation Complete
May 27 03:23:06.516284 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.516369 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.517377 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 processing appconfig overrides
May 27 03:23:06.517898 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.517975 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.518121 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 processing appconfig overrides
May 27 03:23:06.518497 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.518563 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.518696 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 processing appconfig overrides
May 27 03:23:06.519173 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5178 INFO Proxy environment variables:
May 27 03:23:06.522349 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.522349 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:06.522349 amazon-ssm-agent[2196]: 2025/05/27 03:23:06 processing appconfig overrides
May 27 03:23:06.619158 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5178 INFO https_proxy:
May 27 03:23:06.656625 tar[1985]: linux-amd64/README.md
May 27 03:23:06.679510 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 03:23:06.719676 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5178 INFO http_proxy:
May 27 03:23:06.818885 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5178 INFO no_proxy:
May 27 03:23:06.917117 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5181 INFO Checking if agent identity type OnPrem can be assumed
May 27 03:23:07.016358 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5184 INFO Checking if agent identity type EC2 can be assumed
May 27 03:23:07.115666 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5881 INFO Agent will take identity from EC2
May 27 03:23:07.214888 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5895 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
May 27 03:23:07.302787 amazon-ssm-agent[2196]: 2025/05/27 03:23:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:07.302896 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 03:23:07.303017 amazon-ssm-agent[2196]: 2025/05/27 03:23:07 processing appconfig overrides
May 27 03:23:07.314217 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5895 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
May 27 03:23:07.327733 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5895 INFO [amazon-ssm-agent] Starting Core Agent
May 27 03:23:07.327733 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5896 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
May 27 03:23:07.327733 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5896 INFO [Registrar] Starting registrar module
May 27 03:23:07.327733 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5909 INFO [EC2Identity] Checking disk for registration info
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5910 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:06.5910 INFO [EC2Identity] Generating registration keypair
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.2614 INFO [EC2Identity] Checking write access before registering
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.2618 INFO [EC2Identity] Registering EC2 instance with Systems Manager
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.3026 INFO [EC2Identity] EC2 registration was successful.
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.3026 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.3027 INFO [CredentialRefresher] credentialRefresher has started
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.3027 INFO [CredentialRefresher] Starting credentials refresher loop
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.3274 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 27 03:23:07.327904 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.3276 INFO [CredentialRefresher] Credentials ready
May 27 03:23:07.413400 amazon-ssm-agent[2196]: 2025-05-27 03:23:07.3278 INFO [CredentialRefresher] Next credential rotation will be in 29.999993834333335 minutes
May 27 03:23:08.195726 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 03:23:08.198152 systemd[1]: Started sshd@0-172.31.28.64:22-139.178.68.195:48296.service - OpenSSH per-connection server daemon (139.178.68.195:48296). May 27 03:23:08.339583 amazon-ssm-agent[2196]: 2025-05-27 03:23:08.3392 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 27 03:23:08.397563 sshd[2223]: Accepted publickey for core from 139.178.68.195 port 48296 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:23:08.399784 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:23:08.411719 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:23:08.414054 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:23:08.427997 systemd-logind[1977]: New session 1 of user core. May 27 03:23:08.437632 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:23:08.441532 amazon-ssm-agent[2196]: 2025-05-27 03:23:08.3408 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2227) started May 27 03:23:08.442677 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:23:08.456818 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:23:08.459858 systemd-logind[1977]: New session c1 of user core. May 27 03:23:08.542349 amazon-ssm-agent[2196]: 2025-05-27 03:23:08.3408 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 27 03:23:08.602449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:08.603284 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 27 03:23:08.615036 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:23:08.618260 systemd[2235]: Queued start job for default target default.target. May 27 03:23:08.619351 systemd[2235]: Created slice app.slice - User Application Slice. May 27 03:23:08.619378 systemd[2235]: Reached target paths.target - Paths. May 27 03:23:08.619417 systemd[2235]: Reached target timers.target - Timers. May 27 03:23:08.620550 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:23:08.638587 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:23:08.638822 systemd[2235]: Reached target sockets.target - Sockets. May 27 03:23:08.638931 systemd[2235]: Reached target basic.target - Basic System. May 27 03:23:08.639016 systemd[2235]: Reached target default.target - Main User Target. May 27 03:23:08.639089 systemd[2235]: Startup finished in 171ms. May 27 03:23:08.639097 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:23:08.645487 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:23:08.646136 systemd[1]: Startup finished in 2.731s (kernel) + 6.871s (initrd) + 7.161s (userspace) = 16.763s. May 27 03:23:08.792105 systemd[1]: Started sshd@1-172.31.28.64:22-139.178.68.195:48308.service - OpenSSH per-connection server daemon (139.178.68.195:48308). May 27 03:23:08.959392 sshd[2262]: Accepted publickey for core from 139.178.68.195 port 48308 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:23:08.960679 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:23:08.966878 systemd-logind[1977]: New session 2 of user core. May 27 03:23:08.972559 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 03:23:09.090570 sshd[2268]: Connection closed by 139.178.68.195 port 48308 May 27 03:23:09.092430 sshd-session[2262]: pam_unix(sshd:session): session closed for user core May 27 03:23:09.096831 systemd[1]: sshd@1-172.31.28.64:22-139.178.68.195:48308.service: Deactivated successfully. May 27 03:23:09.099927 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:23:09.101246 systemd-logind[1977]: Session 2 logged out. Waiting for processes to exit. May 27 03:23:09.103586 systemd-logind[1977]: Removed session 2. May 27 03:23:09.122340 systemd[1]: Started sshd@2-172.31.28.64:22-139.178.68.195:48310.service - OpenSSH per-connection server daemon (139.178.68.195:48310). May 27 03:23:09.284202 ntpd[1971]: Listen normally on 7 eth0 [fe80::466:50ff:fec6:75a9%2]:123 May 27 03:23:09.284625 ntpd[1971]: 27 May 03:23:09 ntpd[1971]: Listen normally on 7 eth0 [fe80::466:50ff:fec6:75a9%2]:123 May 27 03:23:09.286829 sshd[2274]: Accepted publickey for core from 139.178.68.195 port 48310 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:23:09.288429 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:23:09.293176 systemd-logind[1977]: New session 3 of user core. May 27 03:23:09.302533 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:23:09.415888 sshd[2276]: Connection closed by 139.178.68.195 port 48310 May 27 03:23:09.416504 sshd-session[2274]: pam_unix(sshd:session): session closed for user core May 27 03:23:09.420592 systemd-logind[1977]: Session 3 logged out. Waiting for processes to exit. May 27 03:23:09.421029 systemd[1]: sshd@2-172.31.28.64:22-139.178.68.195:48310.service: Deactivated successfully. May 27 03:23:09.423151 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:23:09.424963 systemd-logind[1977]: Removed session 3. 
May 27 03:23:09.448244 systemd[1]: Started sshd@3-172.31.28.64:22-139.178.68.195:48318.service - OpenSSH per-connection server daemon (139.178.68.195:48318).
May 27 03:23:09.624013 sshd[2283]: Accepted publickey for core from 139.178.68.195 port 48318 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:23:09.625403 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:09.630379 systemd-logind[1977]: New session 4 of user core.
May 27 03:23:09.635533 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:23:09.750179 kubelet[2252]: E0527 03:23:09.749770 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:23:09.752383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:23:09.752529 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:23:09.752898 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 267.3M memory peak.
May 27 03:23:09.761784 sshd[2285]: Connection closed by 139.178.68.195 port 48318
May 27 03:23:09.762308 sshd-session[2283]: pam_unix(sshd:session): session closed for user core
May 27 03:23:09.765293 systemd[1]: sshd@3-172.31.28.64:22-139.178.68.195:48318.service: Deactivated successfully.
May 27 03:23:09.767140 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:23:09.769585 systemd-logind[1977]: Session 4 logged out. Waiting for processes to exit.
May 27 03:23:09.770754 systemd-logind[1977]: Removed session 4.
May 27 03:23:09.796998 systemd[1]: Started sshd@4-172.31.28.64:22-139.178.68.195:48328.service - OpenSSH per-connection server daemon (139.178.68.195:48328).
May 27 03:23:09.978864 sshd[2292]: Accepted publickey for core from 139.178.68.195 port 48328 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:23:09.980154 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:09.985352 systemd-logind[1977]: New session 5 of user core.
May 27 03:23:09.992526 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:23:10.126562 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:23:10.126836 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:10.142887 sudo[2295]: pam_unix(sudo:session): session closed for user root
May 27 03:23:10.165307 sshd[2294]: Connection closed by 139.178.68.195 port 48328
May 27 03:23:10.166069 sshd-session[2292]: pam_unix(sshd:session): session closed for user core
May 27 03:23:10.170080 systemd[1]: sshd@4-172.31.28.64:22-139.178.68.195:48328.service: Deactivated successfully.
May 27 03:23:10.171632 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:23:10.172277 systemd-logind[1977]: Session 5 logged out. Waiting for processes to exit.
May 27 03:23:10.173790 systemd-logind[1977]: Removed session 5.
May 27 03:23:10.205293 systemd[1]: Started sshd@5-172.31.28.64:22-139.178.68.195:48344.service - OpenSSH per-connection server daemon (139.178.68.195:48344).
May 27 03:23:10.374807 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 48344 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:23:10.376263 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:10.381247 systemd-logind[1977]: New session 6 of user core.
May 27 03:23:10.390535 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:23:10.487162 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:23:10.487459 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:10.493557 sudo[2305]: pam_unix(sudo:session): session closed for user root
May 27 03:23:10.498863 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:23:10.499120 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:10.508546 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:23:10.552911 augenrules[2327]: No rules
May 27 03:23:10.554112 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:23:10.554490 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:23:10.555375 sudo[2304]: pam_unix(sudo:session): session closed for user root
May 27 03:23:10.577722 sshd[2303]: Connection closed by 139.178.68.195 port 48344
May 27 03:23:10.578208 sshd-session[2301]: pam_unix(sshd:session): session closed for user core
May 27 03:23:10.582291 systemd[1]: sshd@5-172.31.28.64:22-139.178.68.195:48344.service: Deactivated successfully.
May 27 03:23:10.583961 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:23:10.584640 systemd-logind[1977]: Session 6 logged out. Waiting for processes to exit.
May 27 03:23:10.585842 systemd-logind[1977]: Removed session 6.
May 27 03:23:10.613160 systemd[1]: Started sshd@6-172.31.28.64:22-139.178.68.195:48354.service - OpenSSH per-connection server daemon (139.178.68.195:48354).
May 27 03:23:10.781916 sshd[2336]: Accepted publickey for core from 139.178.68.195 port 48354 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:23:10.782866 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:23:10.787376 systemd-logind[1977]: New session 7 of user core.
May 27 03:23:10.794479 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:23:10.892652 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:23:10.892915 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:23:11.524818 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:23:11.534725 (dockerd)[2357]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:23:11.976402 dockerd[2357]: time="2025-05-27T03:23:11.976266034Z" level=info msg="Starting up"
May 27 03:23:11.979432 dockerd[2357]: time="2025-05-27T03:23:11.979393255Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:23:12.011214 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2634555279-merged.mount: Deactivated successfully.
May 27 03:23:12.115543 dockerd[2357]: time="2025-05-27T03:23:12.115500493Z" level=info msg="Loading containers: start."
May 27 03:23:12.140400 kernel: Initializing XFRM netlink socket
May 27 03:23:12.957457 systemd-resolved[1807]: Clock change detected. Flushing caches.
May 27 03:23:13.091283 (udev-worker)[2379]: Network interface NamePolicy= disabled on kernel command line.
May 27 03:23:13.135610 systemd-networkd[1806]: docker0: Link UP
May 27 03:23:13.142752 dockerd[2357]: time="2025-05-27T03:23:13.142706325Z" level=info msg="Loading containers: done."
May 27 03:23:13.163677 dockerd[2357]: time="2025-05-27T03:23:13.163549425Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:23:13.163677 dockerd[2357]: time="2025-05-27T03:23:13.163643742Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:23:13.164005 dockerd[2357]: time="2025-05-27T03:23:13.163975797Z" level=info msg="Initializing buildkit"
May 27 03:23:13.193539 dockerd[2357]: time="2025-05-27T03:23:13.193474173Z" level=info msg="Completed buildkit initialization"
May 27 03:23:13.200768 dockerd[2357]: time="2025-05-27T03:23:13.200721385Z" level=info msg="Daemon has completed initialization"
May 27 03:23:13.201571 dockerd[2357]: time="2025-05-27T03:23:13.200893573Z" level=info msg="API listen on /run/docker.sock"
May 27 03:23:13.200969 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:23:14.272099 containerd[1990]: time="2025-05-27T03:23:14.272058288Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 27 03:23:14.885759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317100263.mount: Deactivated successfully.
May 27 03:23:16.202922 containerd[1990]: time="2025-05-27T03:23:16.202791710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:16.204058 containerd[1990]: time="2025-05-27T03:23:16.203888884Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403"
May 27 03:23:16.205881 containerd[1990]: time="2025-05-27T03:23:16.205853213Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:16.208566 containerd[1990]: time="2025-05-27T03:23:16.208537047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:16.209366 containerd[1990]: time="2025-05-27T03:23:16.209340304Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.937247348s"
May 27 03:23:16.209607 containerd[1990]: time="2025-05-27T03:23:16.209455669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 27 03:23:16.210093 containerd[1990]: time="2025-05-27T03:23:16.210067451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 27 03:23:17.871048 containerd[1990]: time="2025-05-27T03:23:17.870994300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:17.872081 containerd[1990]: time="2025-05-27T03:23:17.872037757Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390"
May 27 03:23:17.874267 containerd[1990]: time="2025-05-27T03:23:17.874218337Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:17.877529 containerd[1990]: time="2025-05-27T03:23:17.876702650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:17.877529 containerd[1990]: time="2025-05-27T03:23:17.877398389Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.667305079s"
May 27 03:23:17.877529 containerd[1990]: time="2025-05-27T03:23:17.877427559Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 27 03:23:17.878000 containerd[1990]: time="2025-05-27T03:23:17.877983228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 27 03:23:19.184997 containerd[1990]: time="2025-05-27T03:23:19.184946946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:19.186198 containerd[1990]: time="2025-05-27T03:23:19.186029264Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960"
May 27 03:23:19.188561 containerd[1990]: time="2025-05-27T03:23:19.188528816Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:19.192059 containerd[1990]: time="2025-05-27T03:23:19.192025635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:19.193116 containerd[1990]: time="2025-05-27T03:23:19.192931595Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.314790909s"
May 27 03:23:19.193116 containerd[1990]: time="2025-05-27T03:23:19.192967859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 27 03:23:19.193843 containerd[1990]: time="2025-05-27T03:23:19.193621193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 27 03:23:20.181889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040239890.mount: Deactivated successfully.
May 27 03:23:20.577893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 03:23:20.580761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:20.742633 containerd[1990]: time="2025-05-27T03:23:20.742574584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:20.745175 containerd[1990]: time="2025-05-27T03:23:20.745116962Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075"
May 27 03:23:20.746429 containerd[1990]: time="2025-05-27T03:23:20.746394504Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:20.752175 containerd[1990]: time="2025-05-27T03:23:20.752125812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:20.752674 containerd[1990]: time="2025-05-27T03:23:20.752644016Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.558965368s"
May 27 03:23:20.752730 containerd[1990]: time="2025-05-27T03:23:20.752678761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 27 03:23:20.753968 containerd[1990]: time="2025-05-27T03:23:20.753909827Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 27 03:23:20.868587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:20.875846 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:23:20.917625 kubelet[2638]: E0527 03:23:20.917568 2638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:23:20.921814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:23:20.922139 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:23:20.922635 systemd[1]: kubelet.service: Consumed 167ms CPU time, 109.5M memory peak.
May 27 03:23:21.312764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132249958.mount: Deactivated successfully.
May 27 03:23:22.416978 containerd[1990]: time="2025-05-27T03:23:22.416921388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:22.417953 containerd[1990]: time="2025-05-27T03:23:22.417912040Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
May 27 03:23:22.420687 containerd[1990]: time="2025-05-27T03:23:22.420637441Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:22.423589 containerd[1990]: time="2025-05-27T03:23:22.423531510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:22.425095 containerd[1990]: time="2025-05-27T03:23:22.424530981Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.670587503s"
May 27 03:23:22.425095 containerd[1990]: time="2025-05-27T03:23:22.424570944Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 27 03:23:22.425524 containerd[1990]: time="2025-05-27T03:23:22.425497488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 03:23:22.887959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520114584.mount: Deactivated successfully.
May 27 03:23:22.900571 containerd[1990]: time="2025-05-27T03:23:22.900522951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:23:22.902563 containerd[1990]: time="2025-05-27T03:23:22.902527293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 03:23:22.904595 containerd[1990]: time="2025-05-27T03:23:22.904535589Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:23:22.908293 containerd[1990]: time="2025-05-27T03:23:22.907601889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:23:22.908293 containerd[1990]: time="2025-05-27T03:23:22.908150817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 482.624287ms"
May 27 03:23:22.908293 containerd[1990]: time="2025-05-27T03:23:22.908177645Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 03:23:22.908757 containerd[1990]: time="2025-05-27T03:23:22.908720529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 27 03:23:25.476995 containerd[1990]: time="2025-05-27T03:23:25.476868898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:25.478004 containerd[1990]: time="2025-05-27T03:23:25.477965196Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739"
May 27 03:23:25.480386 containerd[1990]: time="2025-05-27T03:23:25.480334438Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:25.483803 containerd[1990]: time="2025-05-27T03:23:25.483757135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:25.485196 containerd[1990]: time="2025-05-27T03:23:25.484586862Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.575686748s"
May 27 03:23:25.485196 containerd[1990]: time="2025-05-27T03:23:25.484620125Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
May 27 03:23:28.379356 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:28.379626 systemd[1]: kubelet.service: Consumed 167ms CPU time, 109.5M memory peak.
May 27 03:23:28.382700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:28.418471 systemd[1]: Reload requested from client PID 2741 ('systemctl') (unit session-7.scope)...
May 27 03:23:28.418535 systemd[1]: Reloading...
May 27 03:23:28.543514 zram_generator::config[2782]: No configuration found.
May 27 03:23:28.702109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:23:28.843161 systemd[1]: Reloading finished in 424 ms.
May 27 03:23:28.904091 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 03:23:28.904187 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 03:23:28.904440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:28.904518 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.2M memory peak.
May 27 03:23:28.906281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:29.128180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:29.137015 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:23:29.197282 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:23:29.197282 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:23:29.197282 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:23:29.200365 kubelet[2849]: I0527 03:23:29.200307 2849 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:23:29.868656 kubelet[2849]: I0527 03:23:29.868614 2849 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 03:23:29.868656 kubelet[2849]: I0527 03:23:29.868644 2849 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:23:29.868975 kubelet[2849]: I0527 03:23:29.868953 2849 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 03:23:29.910864 kubelet[2849]: I0527 03:23:29.910718 2849 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:23:29.920656 kubelet[2849]: E0527 03:23:29.920605 2849 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 03:23:29.942057 kubelet[2849]: I0527 03:23:29.941991 2849 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:23:29.956096 kubelet[2849]: I0527 03:23:29.956065 2849 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:23:29.958920 kubelet[2849]: I0527 03:23:29.958856 2849 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:23:29.962796 kubelet[2849]: I0527 03:23:29.958907 2849 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-64","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:23:29.965508 kubelet[2849]: I0527 03:23:29.965427 2849 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:23:29.965508 kubelet[2849]: I0527 03:23:29.965456 2849 container_manager_linux.go:303] "Creating device plugin manager"
May 27 03:23:29.965631 kubelet[2849]: I0527 03:23:29.965602 2849 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:23:29.968691 kubelet[2849]: I0527 03:23:29.968670 2849 kubelet.go:480] "Attempting to sync node with API server"
May 27 03:23:29.968691 kubelet[2849]: I0527 03:23:29.968693 2849 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:23:29.968891 kubelet[2849]: I0527 03:23:29.968717 2849 kubelet.go:386] "Adding apiserver pod source"
May 27 03:23:29.968891 kubelet[2849]: I0527 03:23:29.968729 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:23:29.984287 kubelet[2849]: E0527 03:23:29.984246 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 03:23:29.984411 kubelet[2849]: I0527 03:23:29.984351 2849 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:23:29.984813 kubelet[2849]: I0527 03:23:29.984795 2849 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 03:23:29.986246 kubelet[2849]: E0527 03:23:29.986187 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-64&limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 03:23:29.986425 kubelet[2849]: W0527 03:23:29.986341 2849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 03:23:29.991159 kubelet[2849]: I0527 03:23:29.991116 2849 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:23:29.991245 kubelet[2849]: I0527 03:23:29.991186 2849 server.go:1289] "Started kubelet"
May 27 03:23:29.992965 kubelet[2849]: I0527 03:23:29.992914 2849 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:23:29.995145 kubelet[2849]: I0527 03:23:29.994615 2849 server.go:317] "Adding debug handlers to kubelet server"
May 27 03:23:29.999658 kubelet[2849]: E0527 03:23:29.996543 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.64:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.64:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-64.184344547b946514 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-64,UID:ip-172-31-28-64,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-64,},FirstTimestamp:2025-05-27 03:23:29.99114882 +0000 UTC m=+0.848953162,LastTimestamp:2025-05-27 03:23:29.99114882 +0000 UTC m=+0.848953162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-64,}"
May 27 03:23:29.999882 kubelet[2849]: I0527 03:23:29.999842 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:23:30.000162 kubelet[2849]: I0527 03:23:30.000146 2849 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:23:30.001497 kubelet[2849]: I0527 03:23:30.001463 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:23:30.003562 kubelet[2849]: I0527 03:23:30.003546 2849 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:23:30.008584 kubelet[2849]: E0527 03:23:30.008549 2849 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-64\" not found"
May 27 03:23:30.008688 kubelet[2849]: I0527 03:23:30.008602 2849 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:23:30.009148 kubelet[2849]: I0527 03:23:30.008782 2849 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:23:30.009148 kubelet[2849]: I0527 03:23:30.008837 2849 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:23:30.009240 kubelet[2849]: E0527 03:23:30.009181 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 03:23:30.012542 kubelet[2849]: I0527 03:23:30.012440 2849 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 03:23:30.016036 kubelet[2849]: E0527 03:23:30.015991 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-64?timeout=10s\": dial tcp 172.31.28.64:6443: connect: connection refused" interval="200ms"
May 27 03:23:30.023119 kubelet[2849]: E0527 03:23:30.023093 2849 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:23:30.023757 kubelet[2849]: I0527 03:23:30.023737 2849 factory.go:223] Registration of the containerd container factory successfully
May 27 03:23:30.023757 kubelet[2849]: I0527 03:23:30.023751 2849 factory.go:223] Registration of the systemd container factory successfully
May 27 03:23:30.023848 kubelet[2849]: I0527 03:23:30.023810 2849 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:23:30.052912 kubelet[2849]: I0527 03:23:30.052644 2849 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:23:30.052912 kubelet[2849]: I0527 03:23:30.052669 2849 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:23:30.052912 kubelet[2849]: I0527 03:23:30.052689 2849 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:23:30.053108 kubelet[2849]: I0527 03:23:30.053060 2849 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 03:23:30.053108 kubelet[2849]: I0527 03:23:30.053081 2849 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 03:23:30.053108 kubelet[2849]: I0527 03:23:30.053102 2849 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 03:23:30.053225 kubelet[2849]: I0527 03:23:30.053111 2849 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:23:30.053225 kubelet[2849]: E0527 03:23:30.053155 2849 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:23:30.056951 kubelet[2849]: E0527 03:23:30.056904 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:23:30.057688 kubelet[2849]: I0527 03:23:30.057659 2849 policy_none.go:49] "None policy: Start" May 27 03:23:30.057688 kubelet[2849]: I0527 03:23:30.057689 2849 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:23:30.057795 kubelet[2849]: I0527 03:23:30.057707 2849 state_mem.go:35] "Initializing new in-memory state store" May 27 03:23:30.066211 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:23:30.077108 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:23:30.080525 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 03:23:30.088504 kubelet[2849]: E0527 03:23:30.088428 2849 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:23:30.088773 kubelet[2849]: I0527 03:23:30.088761 2849 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:23:30.089083 kubelet[2849]: I0527 03:23:30.089044 2849 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:23:30.089364 kubelet[2849]: I0527 03:23:30.089353 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:23:30.091058 kubelet[2849]: E0527 03:23:30.091042 2849 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:23:30.091177 kubelet[2849]: E0527 03:23:30.091167 2849 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-64\" not found" May 27 03:23:30.167073 systemd[1]: Created slice kubepods-burstable-podcd170461a596363689aa00af0d4b1781.slice - libcontainer container kubepods-burstable-podcd170461a596363689aa00af0d4b1781.slice. May 27 03:23:30.174947 kubelet[2849]: E0527 03:23:30.174605 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:30.180715 systemd[1]: Created slice kubepods-burstable-pode152c63d08a9b163fc25774b940813d1.slice - libcontainer container kubepods-burstable-pode152c63d08a9b163fc25774b940813d1.slice. 
May 27 03:23:30.185851 kubelet[2849]: E0527 03:23:30.185824 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:30.189262 systemd[1]: Created slice kubepods-burstable-pod51df5490be63c8e5dce1d836c535027d.slice - libcontainer container kubepods-burstable-pod51df5490be63c8e5dce1d836c535027d.slice. May 27 03:23:30.191659 kubelet[2849]: I0527 03:23:30.191626 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-64" May 27 03:23:30.192206 kubelet[2849]: E0527 03:23:30.192114 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:30.192439 kubelet[2849]: E0527 03:23:30.192406 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.64:6443/api/v1/nodes\": dial tcp 172.31.28.64:6443: connect: connection refused" node="ip-172-31-28-64" May 27 03:23:30.217368 kubelet[2849]: E0527 03:23:30.217313 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-64?timeout=10s\": dial tcp 172.31.28.64:6443: connect: connection refused" interval="400ms" May 27 03:23:30.310907 kubelet[2849]: I0527 03:23:30.310718 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:30.310907 kubelet[2849]: I0527 03:23:30.310808 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:30.310907 kubelet[2849]: I0527 03:23:30.310854 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd170461a596363689aa00af0d4b1781-ca-certs\") pod \"kube-apiserver-ip-172-31-28-64\" (UID: \"cd170461a596363689aa00af0d4b1781\") " pod="kube-system/kube-apiserver-ip-172-31-28-64" May 27 03:23:30.310907 kubelet[2849]: I0527 03:23:30.310870 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd170461a596363689aa00af0d4b1781-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-64\" (UID: \"cd170461a596363689aa00af0d4b1781\") " pod="kube-system/kube-apiserver-ip-172-31-28-64" May 27 03:23:30.310907 kubelet[2849]: I0527 03:23:30.310887 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd170461a596363689aa00af0d4b1781-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-64\" (UID: \"cd170461a596363689aa00af0d4b1781\") " pod="kube-system/kube-apiserver-ip-172-31-28-64" May 27 03:23:30.311175 kubelet[2849]: I0527 03:23:30.310904 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51df5490be63c8e5dce1d836c535027d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-64\" (UID: \"51df5490be63c8e5dce1d836c535027d\") " pod="kube-system/kube-scheduler-ip-172-31-28-64" May 27 03:23:30.311175 kubelet[2849]: I0527 03:23:30.310920 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:30.311175 kubelet[2849]: I0527 03:23:30.310935 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:30.311175 kubelet[2849]: I0527 03:23:30.310950 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:30.394634 kubelet[2849]: I0527 03:23:30.394298 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-64" May 27 03:23:30.394634 kubelet[2849]: E0527 03:23:30.394589 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.64:6443/api/v1/nodes\": dial tcp 172.31.28.64:6443: connect: connection refused" node="ip-172-31-28-64" May 27 03:23:30.476143 containerd[1990]: time="2025-05-27T03:23:30.476017416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-64,Uid:cd170461a596363689aa00af0d4b1781,Namespace:kube-system,Attempt:0,}" May 27 03:23:30.493732 containerd[1990]: time="2025-05-27T03:23:30.493662373Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-64,Uid:e152c63d08a9b163fc25774b940813d1,Namespace:kube-system,Attempt:0,}" May 27 03:23:30.494246 containerd[1990]: time="2025-05-27T03:23:30.494202621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-64,Uid:51df5490be63c8e5dce1d836c535027d,Namespace:kube-system,Attempt:0,}" May 27 03:23:30.620222 kubelet[2849]: E0527 03:23:30.620168 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-64?timeout=10s\": dial tcp 172.31.28.64:6443: connect: connection refused" interval="800ms" May 27 03:23:30.628189 containerd[1990]: time="2025-05-27T03:23:30.627530747Z" level=info msg="connecting to shim b59489e40ed4c09a0aba0d580aef2e22ffea718206fb61a0fc2323587761577b" address="unix:///run/containerd/s/298dbacc323a56a3c76ddade0cc1157fdcea5d248a24ab602a908abd67bb4bd1" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:30.630136 containerd[1990]: time="2025-05-27T03:23:30.629974108Z" level=info msg="connecting to shim 1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244" address="unix:///run/containerd/s/dd1a0b29785832deb6f0839fbc3ef0fa0124499b83452d5d72fb13a298ba7b6e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:30.634781 containerd[1990]: time="2025-05-27T03:23:30.634734595Z" level=info msg="connecting to shim 70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69" address="unix:///run/containerd/s/5f34323f859a7963fe496f0b3f77ce2da1028961e416f880c18286d27d089f69" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:30.761887 systemd[1]: Started cri-containerd-1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244.scope - libcontainer container 1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244. 
May 27 03:23:30.765705 systemd[1]: Started cri-containerd-70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69.scope - libcontainer container 70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69. May 27 03:23:30.768038 systemd[1]: Started cri-containerd-b59489e40ed4c09a0aba0d580aef2e22ffea718206fb61a0fc2323587761577b.scope - libcontainer container b59489e40ed4c09a0aba0d580aef2e22ffea718206fb61a0fc2323587761577b. May 27 03:23:30.800658 kubelet[2849]: I0527 03:23:30.800628 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-64" May 27 03:23:30.801699 kubelet[2849]: E0527 03:23:30.801670 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.64:6443/api/v1/nodes\": dial tcp 172.31.28.64:6443: connect: connection refused" node="ip-172-31-28-64" May 27 03:23:30.893817 containerd[1990]: time="2025-05-27T03:23:30.893762865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-64,Uid:e152c63d08a9b163fc25774b940813d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244\"" May 27 03:23:30.894932 containerd[1990]: time="2025-05-27T03:23:30.894883574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-64,Uid:cd170461a596363689aa00af0d4b1781,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59489e40ed4c09a0aba0d580aef2e22ffea718206fb61a0fc2323587761577b\"" May 27 03:23:30.905754 containerd[1990]: time="2025-05-27T03:23:30.905716264Z" level=info msg="CreateContainer within sandbox \"1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:23:30.907472 containerd[1990]: time="2025-05-27T03:23:30.907440043Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-64,Uid:51df5490be63c8e5dce1d836c535027d,Namespace:kube-system,Attempt:0,} returns sandbox id \"70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69\"" May 27 03:23:30.910158 containerd[1990]: time="2025-05-27T03:23:30.910125689Z" level=info msg="CreateContainer within sandbox \"b59489e40ed4c09a0aba0d580aef2e22ffea718206fb61a0fc2323587761577b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:23:30.914520 containerd[1990]: time="2025-05-27T03:23:30.914470181Z" level=info msg="CreateContainer within sandbox \"70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:23:30.929816 containerd[1990]: time="2025-05-27T03:23:30.929778516Z" level=info msg="Container 492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:30.940521 containerd[1990]: time="2025-05-27T03:23:30.940472195Z" level=info msg="CreateContainer within sandbox \"1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a\"" May 27 03:23:30.941236 containerd[1990]: time="2025-05-27T03:23:30.941211380Z" level=info msg="StartContainer for \"492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a\"" May 27 03:23:30.943254 containerd[1990]: time="2025-05-27T03:23:30.943222640Z" level=info msg="connecting to shim 492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a" address="unix:///run/containerd/s/dd1a0b29785832deb6f0839fbc3ef0fa0124499b83452d5d72fb13a298ba7b6e" protocol=ttrpc version=3 May 27 03:23:30.945600 containerd[1990]: time="2025-05-27T03:23:30.945576683Z" level=info msg="Container 1b6a3af72c56c8443afb013bcf2a2ac65c1d0acb2b6ccad0114a55470f411217: CDI devices from CRI 
Config.CDIDevices: []" May 27 03:23:30.948636 containerd[1990]: time="2025-05-27T03:23:30.948604218Z" level=info msg="Container a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:30.965718 systemd[1]: Started cri-containerd-492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a.scope - libcontainer container 492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a. May 27 03:23:30.966239 containerd[1990]: time="2025-05-27T03:23:30.965538058Z" level=info msg="CreateContainer within sandbox \"70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676\"" May 27 03:23:30.967238 containerd[1990]: time="2025-05-27T03:23:30.967217174Z" level=info msg="StartContainer for \"a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676\"" May 27 03:23:30.968255 containerd[1990]: time="2025-05-27T03:23:30.968229871Z" level=info msg="connecting to shim a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676" address="unix:///run/containerd/s/5f34323f859a7963fe496f0b3f77ce2da1028961e416f880c18286d27d089f69" protocol=ttrpc version=3 May 27 03:23:30.975336 containerd[1990]: time="2025-05-27T03:23:30.975297283Z" level=info msg="CreateContainer within sandbox \"b59489e40ed4c09a0aba0d580aef2e22ffea718206fb61a0fc2323587761577b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1b6a3af72c56c8443afb013bcf2a2ac65c1d0acb2b6ccad0114a55470f411217\"" May 27 03:23:30.975999 containerd[1990]: time="2025-05-27T03:23:30.975977107Z" level=info msg="StartContainer for \"1b6a3af72c56c8443afb013bcf2a2ac65c1d0acb2b6ccad0114a55470f411217\"" May 27 03:23:30.979646 containerd[1990]: time="2025-05-27T03:23:30.978795318Z" level=info msg="connecting to shim 1b6a3af72c56c8443afb013bcf2a2ac65c1d0acb2b6ccad0114a55470f411217" 
address="unix:///run/containerd/s/298dbacc323a56a3c76ddade0cc1157fdcea5d248a24ab602a908abd67bb4bd1" protocol=ttrpc version=3 May 27 03:23:30.993714 systemd[1]: Started cri-containerd-a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676.scope - libcontainer container a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676. May 27 03:23:31.002743 systemd[1]: Started cri-containerd-1b6a3af72c56c8443afb013bcf2a2ac65c1d0acb2b6ccad0114a55470f411217.scope - libcontainer container 1b6a3af72c56c8443afb013bcf2a2ac65c1d0acb2b6ccad0114a55470f411217. May 27 03:23:31.072727 kubelet[2849]: E0527 03:23:31.072617 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 03:23:31.095413 containerd[1990]: time="2025-05-27T03:23:31.095372290Z" level=info msg="StartContainer for \"a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676\" returns successfully" May 27 03:23:31.097495 containerd[1990]: time="2025-05-27T03:23:31.096455409Z" level=info msg="StartContainer for \"492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a\" returns successfully" May 27 03:23:31.099558 containerd[1990]: time="2025-05-27T03:23:31.099530486Z" level=info msg="StartContainer for \"1b6a3af72c56c8443afb013bcf2a2ac65c1d0acb2b6ccad0114a55470f411217\" returns successfully" May 27 03:23:31.105375 kubelet[2849]: E0527 03:23:31.105332 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-64&limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 03:23:31.207494 
kubelet[2849]: E0527 03:23:31.207443 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 03:23:31.345247 kubelet[2849]: E0527 03:23:31.344578 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:23:31.420605 kubelet[2849]: E0527 03:23:31.420565 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-64?timeout=10s\": dial tcp 172.31.28.64:6443: connect: connection refused" interval="1.6s" May 27 03:23:31.604991 kubelet[2849]: I0527 03:23:31.604692 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-64" May 27 03:23:31.604991 kubelet[2849]: E0527 03:23:31.604956 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.64:6443/api/v1/nodes\": dial tcp 172.31.28.64:6443: connect: connection refused" node="ip-172-31-28-64" May 27 03:23:32.081102 kubelet[2849]: E0527 03:23:32.081076 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:32.086434 kubelet[2849]: E0527 03:23:32.086409 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 
03:23:32.087779 kubelet[2849]: E0527 03:23:32.087759 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:33.091867 kubelet[2849]: E0527 03:23:33.091838 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:33.092295 kubelet[2849]: E0527 03:23:33.092249 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:33.094056 kubelet[2849]: E0527 03:23:33.094031 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:33.209462 kubelet[2849]: I0527 03:23:33.209422 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-64" May 27 03:23:33.735683 kubelet[2849]: E0527 03:23:33.735628 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-64\" not found" node="ip-172-31-28-64" May 27 03:23:33.841820 kubelet[2849]: I0527 03:23:33.841784 2849 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-64" May 27 03:23:33.909862 kubelet[2849]: I0527 03:23:33.909815 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-64" May 27 03:23:33.919075 kubelet[2849]: E0527 03:23:33.919031 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-64" May 27 03:23:33.919075 kubelet[2849]: I0527 03:23:33.919059 2849 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:33.921067 kubelet[2849]: E0527 03:23:33.921028 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:33.921067 kubelet[2849]: I0527 03:23:33.921059 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-64" May 27 03:23:33.922976 kubelet[2849]: E0527 03:23:33.922939 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-64" May 27 03:23:33.978360 kubelet[2849]: I0527 03:23:33.978112 2849 apiserver.go:52] "Watching apiserver" May 27 03:23:34.009457 kubelet[2849]: I0527 03:23:34.009309 2849 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:23:34.092544 kubelet[2849]: I0527 03:23:34.091366 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-64" May 27 03:23:34.092544 kubelet[2849]: I0527 03:23:34.091473 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-64" May 27 03:23:34.094348 kubelet[2849]: E0527 03:23:34.094300 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-64" May 27 03:23:34.095169 kubelet[2849]: E0527 03:23:34.095038 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-64" May 27 03:23:34.243142 
kubelet[2849]: I0527 03:23:34.243048 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:34.245503 kubelet[2849]: E0527 03:23:34.245440 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-64\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-64" May 27 03:23:35.611993 systemd[1]: Reload requested from client PID 3130 ('systemctl') (unit session-7.scope)... May 27 03:23:35.612011 systemd[1]: Reloading... May 27 03:23:35.753542 zram_generator::config[3187]: No configuration found. May 27 03:23:35.849829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:23:36.003165 systemd[1]: Reloading finished in 390 ms. May 27 03:23:36.032361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:36.054040 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:23:36.054363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:36.054459 systemd[1]: kubelet.service: Consumed 1.209s CPU time, 127.2M memory peak. May 27 03:23:36.056728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:36.385960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:36.396891 (kubelet)[3234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:23:36.455424 kubelet[3234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:23:36.455424 kubelet[3234]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:23:36.455424 kubelet[3234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:23:36.456723 kubelet[3234]: I0527 03:23:36.456609 3234 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:23:36.463956 kubelet[3234]: I0527 03:23:36.463929 3234 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 03:23:36.464696 kubelet[3234]: I0527 03:23:36.464080 3234 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:23:36.464696 kubelet[3234]: I0527 03:23:36.464312 3234 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 03:23:36.465738 kubelet[3234]: I0527 03:23:36.465720 3234 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 27 03:23:36.478081 kubelet[3234]: I0527 03:23:36.478038 3234 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:23:36.483100 kubelet[3234]: I0527 03:23:36.483063 3234 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:23:36.490122 kubelet[3234]: I0527 03:23:36.490088 3234 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:23:36.490391 kubelet[3234]: I0527 03:23:36.490348 3234 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:23:36.492012 kubelet[3234]: I0527 03:23:36.490384 3234 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-64","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:23:36.492202 kubelet[3234]: I0527 03:23:36.492020 3234 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:23:36.492202 kubelet[3234]: I0527 03:23:36.492037 3234 container_manager_linux.go:303] "Creating device plugin manager"
May 27 03:23:36.492202 kubelet[3234]: I0527 03:23:36.492100 3234 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:23:36.492327 kubelet[3234]: I0527 03:23:36.492280 3234 kubelet.go:480] "Attempting to sync node with API server"
May 27 03:23:36.492327 kubelet[3234]: I0527 03:23:36.492294 3234 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:23:36.492327 kubelet[3234]: I0527 03:23:36.492324 3234 kubelet.go:386] "Adding apiserver pod source"
May 27 03:23:36.492439 kubelet[3234]: I0527 03:23:36.492341 3234 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:23:36.496057 kubelet[3234]: I0527 03:23:36.496009 3234 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:23:36.496746 kubelet[3234]: I0527 03:23:36.496727 3234 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 03:23:36.503767 kubelet[3234]: I0527 03:23:36.503748 3234 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:23:36.504220 kubelet[3234]: I0527 03:23:36.503882 3234 server.go:1289] "Started kubelet"
May 27 03:23:36.506575 kubelet[3234]: I0527 03:23:36.506558 3234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:23:36.515709 kubelet[3234]: I0527 03:23:36.515238 3234 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:23:36.516453 kubelet[3234]: I0527 03:23:36.516406 3234 server.go:317] "Adding debug handlers to kubelet server"
May 27 03:23:36.524476 kubelet[3234]: I0527 03:23:36.524409 3234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:23:36.525363 kubelet[3234]: I0527 03:23:36.525344 3234 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:23:36.526548 kubelet[3234]: I0527 03:23:36.526531 3234 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:23:36.526949 kubelet[3234]: E0527 03:23:36.526929 3234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-64\" not found"
May 27 03:23:36.527650 kubelet[3234]: I0527 03:23:36.527623 3234 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:23:36.528281 kubelet[3234]: I0527 03:23:36.528178 3234 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:23:36.528451 kubelet[3234]: I0527 03:23:36.528413 3234 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:23:36.535478 kubelet[3234]: I0527 03:23:36.535449 3234 factory.go:223] Registration of the systemd container factory successfully
May 27 03:23:36.535478 kubelet[3234]: I0527 03:23:36.535585 3234 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:23:36.542530 kubelet[3234]: E0527 03:23:36.541381 3234 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:23:36.542746 kubelet[3234]: I0527 03:23:36.542648 3234 factory.go:223] Registration of the containerd container factory successfully
May 27 03:23:36.554690 kubelet[3234]: I0527 03:23:36.554657 3234 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 03:23:36.560795 kubelet[3234]: I0527 03:23:36.560762 3234 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 03:23:36.560795 kubelet[3234]: I0527 03:23:36.560795 3234 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 03:23:36.560970 kubelet[3234]: I0527 03:23:36.560819 3234 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 03:23:36.560970 kubelet[3234]: I0527 03:23:36.560826 3234 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 03:23:36.560970 kubelet[3234]: E0527 03:23:36.560886 3234 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.618935 3234 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.618958 3234 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.618981 3234 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.619146 3234 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.619159 3234 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.619183 3234 policy_none.go:49] "None policy: Start"
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.619192 3234 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.619202 3234 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:23:36.619557 kubelet[3234]: I0527 03:23:36.619322 3234 state_mem.go:75] "Updated machine memory state"
May 27 03:23:36.626961 kubelet[3234]: E0527 03:23:36.626924 3234 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 03:23:36.628053 kubelet[3234]: I0527 03:23:36.628028 3234 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:23:36.628401 kubelet[3234]: I0527 03:23:36.628353 3234 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:23:36.628852 kubelet[3234]: I0527 03:23:36.628837 3234 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:23:36.634627 kubelet[3234]: E0527 03:23:36.633278 3234 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 03:23:36.661569 kubelet[3234]: I0527 03:23:36.661439 3234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-64"
May 27 03:23:36.664980 kubelet[3234]: I0527 03:23:36.664717 3234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-64"
May 27 03:23:36.668355 kubelet[3234]: I0527 03:23:36.668244 3234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-64"
May 27 03:23:36.731750 kubelet[3234]: I0527 03:23:36.730906 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51df5490be63c8e5dce1d836c535027d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-64\" (UID: \"51df5490be63c8e5dce1d836c535027d\") " pod="kube-system/kube-scheduler-ip-172-31-28-64"
May 27 03:23:36.731750 kubelet[3234]: I0527 03:23:36.730955 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd170461a596363689aa00af0d4b1781-ca-certs\") pod \"kube-apiserver-ip-172-31-28-64\" (UID: \"cd170461a596363689aa00af0d4b1781\") " pod="kube-system/kube-apiserver-ip-172-31-28-64"
May 27 03:23:36.731750 kubelet[3234]: I0527 03:23:36.730984 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd170461a596363689aa00af0d4b1781-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-64\" (UID: \"cd170461a596363689aa00af0d4b1781\") " pod="kube-system/kube-apiserver-ip-172-31-28-64"
May 27 03:23:36.731750 kubelet[3234]: I0527 03:23:36.731017 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64"
May 27 03:23:36.731750 kubelet[3234]: I0527 03:23:36.731044 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64"
May 27 03:23:36.731988 kubelet[3234]: I0527 03:23:36.731069 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64"
May 27 03:23:36.731988 kubelet[3234]: I0527 03:23:36.731093 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64"
May 27 03:23:36.731988 kubelet[3234]: I0527 03:23:36.731116 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd170461a596363689aa00af0d4b1781-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-64\" (UID: \"cd170461a596363689aa00af0d4b1781\") " pod="kube-system/kube-apiserver-ip-172-31-28-64"
May 27 03:23:36.731988 kubelet[3234]: I0527 03:23:36.731172 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e152c63d08a9b163fc25774b940813d1-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-64\" (UID: \"e152c63d08a9b163fc25774b940813d1\") " pod="kube-system/kube-controller-manager-ip-172-31-28-64"
May 27 03:23:36.747030 kubelet[3234]: I0527 03:23:36.746413 3234 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-64"
May 27 03:23:36.754974 kubelet[3234]: I0527 03:23:36.754924 3234 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-64"
May 27 03:23:36.755096 kubelet[3234]: I0527 03:23:36.755024 3234 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-64"
May 27 03:23:36.771240 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 03:23:37.493604 kubelet[3234]: I0527 03:23:37.493573 3234 apiserver.go:52] "Watching apiserver"
May 27 03:23:37.528092 kubelet[3234]: I0527 03:23:37.528043 3234 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 03:23:37.588803 kubelet[3234]: I0527 03:23:37.588773 3234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-64"
May 27 03:23:37.598992 kubelet[3234]: E0527 03:23:37.598957 3234 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-64\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-64"
May 27 03:23:37.633568 kubelet[3234]: I0527 03:23:37.633472 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-64" podStartSLOduration=1.633450409 podStartE2EDuration="1.633450409s" podCreationTimestamp="2025-05-27 03:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:37.622171464 +0000 UTC m=+1.217825306" watchObservedRunningTime="2025-05-27 03:23:37.633450409 +0000 UTC m=+1.229104251"
May 27 03:23:37.648426 kubelet[3234]: I0527 03:23:37.648350 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-64" podStartSLOduration=1.648332376 podStartE2EDuration="1.648332376s" podCreationTimestamp="2025-05-27 03:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:37.634195192 +0000 UTC m=+1.229849036" watchObservedRunningTime="2025-05-27 03:23:37.648332376 +0000 UTC m=+1.243986215"
May 27 03:23:37.648883 kubelet[3234]: I0527 03:23:37.648459 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-64" podStartSLOduration=1.648453851 podStartE2EDuration="1.648453851s" podCreationTimestamp="2025-05-27 03:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:37.647948971 +0000 UTC m=+1.243602813" watchObservedRunningTime="2025-05-27 03:23:37.648453851 +0000 UTC m=+1.244107693"
May 27 03:23:40.991582 kubelet[3234]: I0527 03:23:40.991549 3234 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 27 03:23:40.992020 containerd[1990]: time="2025-05-27T03:23:40.991974772Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 27 03:23:40.992598 kubelet[3234]: I0527 03:23:40.992214 3234 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 27 03:23:42.020284 systemd[1]: Created slice kubepods-besteffort-podd4da5ac5_3a79_4774_b7c9_f05ae656fe2f.slice - libcontainer container kubepods-besteffort-podd4da5ac5_3a79_4774_b7c9_f05ae656fe2f.slice.
May 27 03:23:42.060172 kubelet[3234]: I0527 03:23:42.060097 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz9rx\" (UniqueName: \"kubernetes.io/projected/d4da5ac5-3a79-4774-b7c9-f05ae656fe2f-kube-api-access-rz9rx\") pod \"kube-proxy-h68v9\" (UID: \"d4da5ac5-3a79-4774-b7c9-f05ae656fe2f\") " pod="kube-system/kube-proxy-h68v9"
May 27 03:23:42.060172 kubelet[3234]: I0527 03:23:42.060141 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4da5ac5-3a79-4774-b7c9-f05ae656fe2f-kube-proxy\") pod \"kube-proxy-h68v9\" (UID: \"d4da5ac5-3a79-4774-b7c9-f05ae656fe2f\") " pod="kube-system/kube-proxy-h68v9"
May 27 03:23:42.060172 kubelet[3234]: I0527 03:23:42.060161 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4da5ac5-3a79-4774-b7c9-f05ae656fe2f-xtables-lock\") pod \"kube-proxy-h68v9\" (UID: \"d4da5ac5-3a79-4774-b7c9-f05ae656fe2f\") " pod="kube-system/kube-proxy-h68v9"
May 27 03:23:42.060172 kubelet[3234]: I0527 03:23:42.060176 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4da5ac5-3a79-4774-b7c9-f05ae656fe2f-lib-modules\") pod \"kube-proxy-h68v9\" (UID: \"d4da5ac5-3a79-4774-b7c9-f05ae656fe2f\") " pod="kube-system/kube-proxy-h68v9"
May 27 03:23:42.134373 systemd[1]: Created slice kubepods-besteffort-pod36d06580_ea5a_49fe_b671_5f5935cc1bc0.slice - libcontainer container kubepods-besteffort-pod36d06580_ea5a_49fe_b671_5f5935cc1bc0.slice.
May 27 03:23:42.161021 kubelet[3234]: I0527 03:23:42.160592 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36d06580-ea5a-49fe-b671-5f5935cc1bc0-var-lib-calico\") pod \"tigera-operator-844669ff44-t8tcg\" (UID: \"36d06580-ea5a-49fe-b671-5f5935cc1bc0\") " pod="tigera-operator/tigera-operator-844669ff44-t8tcg"
May 27 03:23:42.161021 kubelet[3234]: I0527 03:23:42.160654 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndvj\" (UniqueName: \"kubernetes.io/projected/36d06580-ea5a-49fe-b671-5f5935cc1bc0-kube-api-access-tndvj\") pod \"tigera-operator-844669ff44-t8tcg\" (UID: \"36d06580-ea5a-49fe-b671-5f5935cc1bc0\") " pod="tigera-operator/tigera-operator-844669ff44-t8tcg"
May 27 03:23:42.330143 containerd[1990]: time="2025-05-27T03:23:42.330100524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h68v9,Uid:d4da5ac5-3a79-4774-b7c9-f05ae656fe2f,Namespace:kube-system,Attempt:0,}"
May 27 03:23:42.367542 containerd[1990]: time="2025-05-27T03:23:42.367473622Z" level=info msg="connecting to shim fa6b261256bae5085962ad8e2cfab7e1ae17ec9db1f7256c068f2638af50c03f" address="unix:///run/containerd/s/e3871603380640b6baa11b30972e7af811bc65fdd36ce7a9670366bf31bdd36b" namespace=k8s.io protocol=ttrpc version=3
May 27 03:23:42.401705 systemd[1]: Started cri-containerd-fa6b261256bae5085962ad8e2cfab7e1ae17ec9db1f7256c068f2638af50c03f.scope - libcontainer container fa6b261256bae5085962ad8e2cfab7e1ae17ec9db1f7256c068f2638af50c03f.
May 27 03:23:42.436862 containerd[1990]: time="2025-05-27T03:23:42.436818075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h68v9,Uid:d4da5ac5-3a79-4774-b7c9-f05ae656fe2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa6b261256bae5085962ad8e2cfab7e1ae17ec9db1f7256c068f2638af50c03f\""
May 27 03:23:42.440698 containerd[1990]: time="2025-05-27T03:23:42.440661929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-t8tcg,Uid:36d06580-ea5a-49fe-b671-5f5935cc1bc0,Namespace:tigera-operator,Attempt:0,}"
May 27 03:23:42.444786 containerd[1990]: time="2025-05-27T03:23:42.444748367Z" level=info msg="CreateContainer within sandbox \"fa6b261256bae5085962ad8e2cfab7e1ae17ec9db1f7256c068f2638af50c03f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 27 03:23:42.469672 containerd[1990]: time="2025-05-27T03:23:42.469632787Z" level=info msg="Container 89d251d4a20ec88689ea0da2a0f8f57f228c52574b98211b14e190e8c4a9156a: CDI devices from CRI Config.CDIDevices: []"
May 27 03:23:42.483541 containerd[1990]: time="2025-05-27T03:23:42.482851502Z" level=info msg="connecting to shim ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36" address="unix:///run/containerd/s/c3c258b8f95a04e5808cac4df96d45bc074137ac1a1b05794276f183c14f2e5a" namespace=k8s.io protocol=ttrpc version=3
May 27 03:23:42.484639 containerd[1990]: time="2025-05-27T03:23:42.484609594Z" level=info msg="CreateContainer within sandbox \"fa6b261256bae5085962ad8e2cfab7e1ae17ec9db1f7256c068f2638af50c03f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89d251d4a20ec88689ea0da2a0f8f57f228c52574b98211b14e190e8c4a9156a\""
May 27 03:23:42.485336 containerd[1990]: time="2025-05-27T03:23:42.485309646Z" level=info msg="StartContainer for \"89d251d4a20ec88689ea0da2a0f8f57f228c52574b98211b14e190e8c4a9156a\""
May 27 03:23:42.488978 containerd[1990]: time="2025-05-27T03:23:42.488644429Z" level=info msg="connecting to shim 89d251d4a20ec88689ea0da2a0f8f57f228c52574b98211b14e190e8c4a9156a" address="unix:///run/containerd/s/e3871603380640b6baa11b30972e7af811bc65fdd36ce7a9670366bf31bdd36b" protocol=ttrpc version=3
May 27 03:23:42.518671 systemd[1]: Started cri-containerd-ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36.scope - libcontainer container ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36.
May 27 03:23:42.523008 systemd[1]: Started cri-containerd-89d251d4a20ec88689ea0da2a0f8f57f228c52574b98211b14e190e8c4a9156a.scope - libcontainer container 89d251d4a20ec88689ea0da2a0f8f57f228c52574b98211b14e190e8c4a9156a.
May 27 03:23:42.602935 containerd[1990]: time="2025-05-27T03:23:42.602794384Z" level=info msg="StartContainer for \"89d251d4a20ec88689ea0da2a0f8f57f228c52574b98211b14e190e8c4a9156a\" returns successfully"
May 27 03:23:42.605950 containerd[1990]: time="2025-05-27T03:23:42.605913511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-t8tcg,Uid:36d06580-ea5a-49fe-b671-5f5935cc1bc0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36\""
May 27 03:23:42.608565 containerd[1990]: time="2025-05-27T03:23:42.608221083Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 27 03:23:43.178188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890673166.mount: Deactivated successfully.
May 27 03:23:44.233354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734855930.mount: Deactivated successfully.
May 27 03:23:44.896224 containerd[1990]: time="2025-05-27T03:23:44.896175163Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:44.898142 containerd[1990]: time="2025-05-27T03:23:44.898109505Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 27 03:23:44.900510 containerd[1990]: time="2025-05-27T03:23:44.900156346Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:44.903166 containerd[1990]: time="2025-05-27T03:23:44.903123259Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:44.903958 containerd[1990]: time="2025-05-27T03:23:44.903626218Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.295377738s"
May 27 03:23:44.903958 containerd[1990]: time="2025-05-27T03:23:44.903655384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 27 03:23:44.909075 containerd[1990]: time="2025-05-27T03:23:44.909013753Z" level=info msg="CreateContainer within sandbox \"ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 27 03:23:44.929815 containerd[1990]: time="2025-05-27T03:23:44.929767243Z" level=info msg="Container 917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200: CDI devices from CRI Config.CDIDevices: []"
May 27 03:23:44.933392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437770446.mount: Deactivated successfully.
May 27 03:23:44.941875 containerd[1990]: time="2025-05-27T03:23:44.941838147Z" level=info msg="CreateContainer within sandbox \"ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\""
May 27 03:23:44.942464 containerd[1990]: time="2025-05-27T03:23:44.942428169Z" level=info msg="StartContainer for \"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\""
May 27 03:23:44.943667 containerd[1990]: time="2025-05-27T03:23:44.943514995Z" level=info msg="connecting to shim 917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200" address="unix:///run/containerd/s/c3c258b8f95a04e5808cac4df96d45bc074137ac1a1b05794276f183c14f2e5a" protocol=ttrpc version=3
May 27 03:23:44.962675 systemd[1]: Started cri-containerd-917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200.scope - libcontainer container 917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200.
May 27 03:23:44.995916 containerd[1990]: time="2025-05-27T03:23:44.995881115Z" level=info msg="StartContainer for \"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\" returns successfully"
May 27 03:23:45.617306 kubelet[3234]: I0527 03:23:45.617248 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h68v9" podStartSLOduration=4.617230234 podStartE2EDuration="4.617230234s" podCreationTimestamp="2025-05-27 03:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:43.613619612 +0000 UTC m=+7.209273455" watchObservedRunningTime="2025-05-27 03:23:45.617230234 +0000 UTC m=+9.212884075"
May 27 03:23:45.617922 kubelet[3234]: I0527 03:23:45.617363 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-t8tcg" podStartSLOduration=1.320684156 podStartE2EDuration="3.617357521s" podCreationTimestamp="2025-05-27 03:23:42 +0000 UTC" firstStartedPulling="2025-05-27 03:23:42.607671598 +0000 UTC m=+6.203325430" lastFinishedPulling="2025-05-27 03:23:44.904344974 +0000 UTC m=+8.499998795" observedRunningTime="2025-05-27 03:23:45.61709046 +0000 UTC m=+9.212744302" watchObservedRunningTime="2025-05-27 03:23:45.617357521 +0000 UTC m=+9.213011362"
May 27 03:23:51.562053 update_engine[1979]: I20250527 03:23:51.561532 1979 update_attempter.cc:509] Updating boot flags...
May 27 03:23:51.576382 sudo[2339]: pam_unix(sudo:session): session closed for user root
May 27 03:23:51.599579 sshd[2338]: Connection closed by 139.178.68.195 port 48354
May 27 03:23:51.601086 sshd-session[2336]: pam_unix(sshd:session): session closed for user core
May 27 03:23:51.608540 systemd[1]: sshd@6-172.31.28.64:22-139.178.68.195:48354.service: Deactivated successfully.
May 27 03:23:51.614448 systemd[1]: session-7.scope: Deactivated successfully.
May 27 03:23:51.617536 systemd[1]: session-7.scope: Consumed 5.065s CPU time, 152M memory peak.
May 27 03:23:51.619567 systemd-logind[1977]: Session 7 logged out. Waiting for processes to exit.
May 27 03:23:51.622104 systemd-logind[1977]: Removed session 7.
May 27 03:23:55.794494 systemd[1]: Created slice kubepods-besteffort-pod9c118aa5_6cef_479c_b712_f84f5f820f4b.slice - libcontainer container kubepods-besteffort-pod9c118aa5_6cef_479c_b712_f84f5f820f4b.slice.
May 27 03:23:55.874020 kubelet[3234]: I0527 03:23:55.873964 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfpx\" (UniqueName: \"kubernetes.io/projected/9c118aa5-6cef-479c-b712-f84f5f820f4b-kube-api-access-xnfpx\") pod \"calico-typha-696b77c977-wg4w5\" (UID: \"9c118aa5-6cef-479c-b712-f84f5f820f4b\") " pod="calico-system/calico-typha-696b77c977-wg4w5"
May 27 03:23:55.874020 kubelet[3234]: I0527 03:23:55.874014 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c118aa5-6cef-479c-b712-f84f5f820f4b-tigera-ca-bundle\") pod \"calico-typha-696b77c977-wg4w5\" (UID: \"9c118aa5-6cef-479c-b712-f84f5f820f4b\") " pod="calico-system/calico-typha-696b77c977-wg4w5"
May 27 03:23:55.874020 kubelet[3234]: I0527 03:23:55.874030 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c118aa5-6cef-479c-b712-f84f5f820f4b-typha-certs\") pod \"calico-typha-696b77c977-wg4w5\" (UID: \"9c118aa5-6cef-479c-b712-f84f5f820f4b\") " pod="calico-system/calico-typha-696b77c977-wg4w5"
May 27 03:23:56.110530 systemd[1]: Created slice kubepods-besteffort-pod14a74d51_bb07_46e0_9602_f241de6f59dd.slice - libcontainer container kubepods-besteffort-pod14a74d51_bb07_46e0_9602_f241de6f59dd.slice.
May 27 03:23:56.117057 containerd[1990]: time="2025-05-27T03:23:56.116727376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-696b77c977-wg4w5,Uid:9c118aa5-6cef-479c-b712-f84f5f820f4b,Namespace:calico-system,Attempt:0,}"
May 27 03:23:56.171832 containerd[1990]: time="2025-05-27T03:23:56.171776165Z" level=info msg="connecting to shim 51d6e9f53e9de45c35c2ca7be5856e698a90f9aaec692b5fbad87c6f0b73332a" address="unix:///run/containerd/s/6b23440e3c9f905360a9659bb1adf44f7755c5405bc44e69abdb5e1d3200cc74" namespace=k8s.io protocol=ttrpc version=3
May 27 03:23:56.176001 kubelet[3234]: I0527 03:23:56.175954 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-xtables-lock\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176160 kubelet[3234]: I0527 03:23:56.176013 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-var-run-calico\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176160 kubelet[3234]: I0527 03:23:56.176040 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/14a74d51-bb07-46e0-9602-f241de6f59dd-node-certs\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176160 kubelet[3234]: I0527 03:23:56.176061 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-cni-net-dir\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176160 kubelet[3234]: I0527 03:23:56.176083 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-lib-modules\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176160 kubelet[3234]: I0527 03:23:56.176101 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-policysync\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176377 kubelet[3234]: I0527 03:23:56.176121 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-cni-log-dir\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176377 kubelet[3234]: I0527 03:23:56.176141 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-flexvol-driver-host\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176377 kubelet[3234]: I0527 03:23:56.176164 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-cni-bin-dir\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176377 kubelet[3234]: I0527 03:23:56.176188 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14a74d51-bb07-46e0-9602-f241de6f59dd-tigera-ca-bundle\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.176377 kubelet[3234]: I0527 03:23:56.176211 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/14a74d51-bb07-46e0-9602-f241de6f59dd-var-lib-calico\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.177103 kubelet[3234]: I0527 03:23:56.176233 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjxlc\" (UniqueName: \"kubernetes.io/projected/14a74d51-bb07-46e0-9602-f241de6f59dd-kube-api-access-bjxlc\") pod \"calico-node-z6qpw\" (UID: \"14a74d51-bb07-46e0-9602-f241de6f59dd\") " pod="calico-system/calico-node-z6qpw"
May 27 03:23:56.214874 systemd[1]: Started cri-containerd-51d6e9f53e9de45c35c2ca7be5856e698a90f9aaec692b5fbad87c6f0b73332a.scope - libcontainer container 51d6e9f53e9de45c35c2ca7be5856e698a90f9aaec692b5fbad87c6f0b73332a.
May 27 03:23:56.285921 kubelet[3234]: E0527 03:23:56.285778 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.285921 kubelet[3234]: W0527 03:23:56.285803 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.291838 kubelet[3234]: E0527 03:23:56.291803 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.296526 kubelet[3234]: E0527 03:23:56.295948 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.296526 kubelet[3234]: W0527 03:23:56.295972 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.296526 kubelet[3234]: E0527 03:23:56.295996 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.301918 containerd[1990]: time="2025-05-27T03:23:56.301100804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-696b77c977-wg4w5,Uid:9c118aa5-6cef-479c-b712-f84f5f820f4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"51d6e9f53e9de45c35c2ca7be5856e698a90f9aaec692b5fbad87c6f0b73332a\"" May 27 03:23:56.303751 kubelet[3234]: E0527 03:23:56.303732 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.304025 kubelet[3234]: W0527 03:23:56.304004 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.304129 kubelet[3234]: E0527 03:23:56.304055 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.306343 containerd[1990]: time="2025-05-27T03:23:56.306142638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:23:56.397355 kubelet[3234]: E0527 03:23:56.396713 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-27b29" podUID="5b0dc86a-7a64-45f7-952b-6e3978d12edf" May 27 03:23:56.418609 containerd[1990]: time="2025-05-27T03:23:56.418476968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z6qpw,Uid:14a74d51-bb07-46e0-9602-f241de6f59dd,Namespace:calico-system,Attempt:0,}" May 27 03:23:56.458110 kubelet[3234]: E0527 03:23:56.458020 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.458110 kubelet[3234]: W0527 03:23:56.458051 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.458634 kubelet[3234]: E0527 03:23:56.458183 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.458991 kubelet[3234]: E0527 03:23:56.458908 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.458991 kubelet[3234]: W0527 03:23:56.458923 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.458991 kubelet[3234]: E0527 03:23:56.458942 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.459668 kubelet[3234]: E0527 03:23:56.459562 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.459668 kubelet[3234]: W0527 03:23:56.459582 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.459668 kubelet[3234]: E0527 03:23:56.459597 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.460095 containerd[1990]: time="2025-05-27T03:23:56.459880274Z" level=info msg="connecting to shim 85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96" address="unix:///run/containerd/s/70c6558698ef570a8ef2f4613d6ff3ff01da250d7693bed61c8d832a7b19f5c9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:56.460556 kubelet[3234]: E0527 03:23:56.460473 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.460556 kubelet[3234]: W0527 03:23:56.460506 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.460556 kubelet[3234]: E0527 03:23:56.460523 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.461126 kubelet[3234]: E0527 03:23:56.461078 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.461126 kubelet[3234]: W0527 03:23:56.461094 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.461126 kubelet[3234]: E0527 03:23:56.461108 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.461668 kubelet[3234]: E0527 03:23:56.461649 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.461668 kubelet[3234]: W0527 03:23:56.461666 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.461928 kubelet[3234]: E0527 03:23:56.461679 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.462380 kubelet[3234]: E0527 03:23:56.462364 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.462700 kubelet[3234]: W0527 03:23:56.462601 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.462700 kubelet[3234]: E0527 03:23:56.462622 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.463904 kubelet[3234]: E0527 03:23:56.463889 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.464110 kubelet[3234]: W0527 03:23:56.463979 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.464110 kubelet[3234]: E0527 03:23:56.463997 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.464478 kubelet[3234]: E0527 03:23:56.464364 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.464478 kubelet[3234]: W0527 03:23:56.464380 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.464478 kubelet[3234]: E0527 03:23:56.464393 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.464791 kubelet[3234]: E0527 03:23:56.464777 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.465014 kubelet[3234]: W0527 03:23:56.464930 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.465014 kubelet[3234]: E0527 03:23:56.464954 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.465708 kubelet[3234]: E0527 03:23:56.465628 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.465708 kubelet[3234]: W0527 03:23:56.465644 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.465708 kubelet[3234]: E0527 03:23:56.465658 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.466127 kubelet[3234]: E0527 03:23:56.466114 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.466370 kubelet[3234]: W0527 03:23:56.466248 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.466370 kubelet[3234]: E0527 03:23:56.466265 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.467059 kubelet[3234]: E0527 03:23:56.466996 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.467059 kubelet[3234]: W0527 03:23:56.467011 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.467059 kubelet[3234]: E0527 03:23:56.467024 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.468869 kubelet[3234]: E0527 03:23:56.468708 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.468869 kubelet[3234]: W0527 03:23:56.468723 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.468869 kubelet[3234]: E0527 03:23:56.468737 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.469659 kubelet[3234]: E0527 03:23:56.469079 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.469659 kubelet[3234]: W0527 03:23:56.469090 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.469659 kubelet[3234]: E0527 03:23:56.469123 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.469659 kubelet[3234]: E0527 03:23:56.469375 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.469659 kubelet[3234]: W0527 03:23:56.469389 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.469659 kubelet[3234]: E0527 03:23:56.469402 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.470059 kubelet[3234]: E0527 03:23:56.469917 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.470059 kubelet[3234]: W0527 03:23:56.469951 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.470059 kubelet[3234]: E0527 03:23:56.469964 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.470600 kubelet[3234]: E0527 03:23:56.470517 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.470600 kubelet[3234]: W0527 03:23:56.470534 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.470600 kubelet[3234]: E0527 03:23:56.470547 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.471281 kubelet[3234]: E0527 03:23:56.471266 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.471445 kubelet[3234]: W0527 03:23:56.471334 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.471445 kubelet[3234]: E0527 03:23:56.471350 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.472186 kubelet[3234]: E0527 03:23:56.472171 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.472352 kubelet[3234]: W0527 03:23:56.472236 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.472352 kubelet[3234]: E0527 03:23:56.472252 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.479714 kubelet[3234]: E0527 03:23:56.479659 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.479714 kubelet[3234]: W0527 03:23:56.479681 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.479996 kubelet[3234]: E0527 03:23:56.479805 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.480143 kubelet[3234]: I0527 03:23:56.479852 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b0dc86a-7a64-45f7-952b-6e3978d12edf-kubelet-dir\") pod \"csi-node-driver-27b29\" (UID: \"5b0dc86a-7a64-45f7-952b-6e3978d12edf\") " pod="calico-system/csi-node-driver-27b29" May 27 03:23:56.480509 kubelet[3234]: E0527 03:23:56.480452 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.480674 kubelet[3234]: W0527 03:23:56.480598 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.480674 kubelet[3234]: E0527 03:23:56.480619 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.480852 kubelet[3234]: I0527 03:23:56.480804 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhw2p\" (UniqueName: \"kubernetes.io/projected/5b0dc86a-7a64-45f7-952b-6e3978d12edf-kube-api-access-dhw2p\") pod \"csi-node-driver-27b29\" (UID: \"5b0dc86a-7a64-45f7-952b-6e3978d12edf\") " pod="calico-system/csi-node-driver-27b29" May 27 03:23:56.481201 kubelet[3234]: E0527 03:23:56.481187 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.481305 kubelet[3234]: W0527 03:23:56.481293 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.481457 kubelet[3234]: E0527 03:23:56.481392 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.481811 kubelet[3234]: E0527 03:23:56.481732 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.481811 kubelet[3234]: W0527 03:23:56.481750 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.481811 kubelet[3234]: E0527 03:23:56.481766 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.483999 kubelet[3234]: E0527 03:23:56.483977 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.483999 kubelet[3234]: W0527 03:23:56.483994 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.484124 kubelet[3234]: E0527 03:23:56.484009 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.484124 kubelet[3234]: I0527 03:23:56.484045 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5b0dc86a-7a64-45f7-952b-6e3978d12edf-varrun\") pod \"csi-node-driver-27b29\" (UID: \"5b0dc86a-7a64-45f7-952b-6e3978d12edf\") " pod="calico-system/csi-node-driver-27b29" May 27 03:23:56.484355 kubelet[3234]: E0527 03:23:56.484327 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.484355 kubelet[3234]: W0527 03:23:56.484351 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.484512 kubelet[3234]: E0527 03:23:56.484365 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.484512 kubelet[3234]: I0527 03:23:56.484460 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5b0dc86a-7a64-45f7-952b-6e3978d12edf-socket-dir\") pod \"csi-node-driver-27b29\" (UID: \"5b0dc86a-7a64-45f7-952b-6e3978d12edf\") " pod="calico-system/csi-node-driver-27b29" May 27 03:23:56.484803 kubelet[3234]: E0527 03:23:56.484739 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.484803 kubelet[3234]: W0527 03:23:56.484751 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.484803 kubelet[3234]: E0527 03:23:56.484765 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.485509 kubelet[3234]: E0527 03:23:56.485478 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.485688 kubelet[3234]: W0527 03:23:56.485507 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.485688 kubelet[3234]: E0527 03:23:56.485672 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.486042 kubelet[3234]: E0527 03:23:56.486024 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.486042 kubelet[3234]: W0527 03:23:56.486042 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.486256 kubelet[3234]: E0527 03:23:56.486055 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.486256 kubelet[3234]: I0527 03:23:56.486084 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5b0dc86a-7a64-45f7-952b-6e3978d12edf-registration-dir\") pod \"csi-node-driver-27b29\" (UID: \"5b0dc86a-7a64-45f7-952b-6e3978d12edf\") " pod="calico-system/csi-node-driver-27b29" May 27 03:23:56.486550 kubelet[3234]: E0527 03:23:56.486533 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.486658 kubelet[3234]: W0527 03:23:56.486551 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.486658 kubelet[3234]: E0527 03:23:56.486565 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.487162 kubelet[3234]: E0527 03:23:56.487088 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.487162 kubelet[3234]: W0527 03:23:56.487101 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.487162 kubelet[3234]: E0527 03:23:56.487114 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.487745 kubelet[3234]: E0527 03:23:56.487708 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.487745 kubelet[3234]: W0527 03:23:56.487724 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.487745 kubelet[3234]: E0527 03:23:56.487738 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.488245 kubelet[3234]: E0527 03:23:56.488220 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.488245 kubelet[3234]: W0527 03:23:56.488238 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.488349 kubelet[3234]: E0527 03:23:56.488252 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.488727 kubelet[3234]: E0527 03:23:56.488703 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.488727 kubelet[3234]: W0527 03:23:56.488719 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.488985 kubelet[3234]: E0527 03:23:56.488732 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.489146 kubelet[3234]: E0527 03:23:56.489128 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.489146 kubelet[3234]: W0527 03:23:56.489143 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.489294 kubelet[3234]: E0527 03:23:56.489156 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.517520 systemd[1]: Started cri-containerd-85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96.scope - libcontainer container 85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96. May 27 03:23:56.590614 kubelet[3234]: E0527 03:23:56.590271 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.590614 kubelet[3234]: W0527 03:23:56.590300 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.590614 kubelet[3234]: E0527 03:23:56.590339 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:56.639873 kubelet[3234]: E0527 03:23:56.639852 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:56.640033 kubelet[3234]: W0527 03:23:56.640016 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:56.640269 kubelet[3234]: E0527 03:23:56.640233 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:56.666289 containerd[1990]: time="2025-05-27T03:23:56.666142611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z6qpw,Uid:14a74d51-bb07-46e0-9602-f241de6f59dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96\"" May 27 03:23:57.752994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519632325.mount: Deactivated successfully. 
May 27 03:23:58.550598 containerd[1990]: time="2025-05-27T03:23:58.550549385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:58.552143 containerd[1990]: time="2025-05-27T03:23:58.552105546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:23:58.555252 containerd[1990]: time="2025-05-27T03:23:58.554146329Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:58.557148 containerd[1990]: time="2025-05-27T03:23:58.557113824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:58.557838 containerd[1990]: time="2025-05-27T03:23:58.557809347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.251205288s" May 27 03:23:58.557961 containerd[1990]: time="2025-05-27T03:23:58.557943409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:23:58.559890 containerd[1990]: time="2025-05-27T03:23:58.559864320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:23:58.568257 kubelet[3234]: E0527 03:23:58.568186 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-27b29" podUID="5b0dc86a-7a64-45f7-952b-6e3978d12edf" May 27 03:23:58.600989 containerd[1990]: time="2025-05-27T03:23:58.600725737Z" level=info msg="CreateContainer within sandbox \"51d6e9f53e9de45c35c2ca7be5856e698a90f9aaec692b5fbad87c6f0b73332a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:23:58.617131 containerd[1990]: time="2025-05-27T03:23:58.617092359Z" level=info msg="Container 40dc16ed998df8a22d35ef66c79faf73bdd0ad25a73a4e84d572c1d596a1f223: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:58.634559 containerd[1990]: time="2025-05-27T03:23:58.634510180Z" level=info msg="CreateContainer within sandbox \"51d6e9f53e9de45c35c2ca7be5856e698a90f9aaec692b5fbad87c6f0b73332a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"40dc16ed998df8a22d35ef66c79faf73bdd0ad25a73a4e84d572c1d596a1f223\"" May 27 03:23:58.635155 containerd[1990]: time="2025-05-27T03:23:58.635102677Z" level=info msg="StartContainer for \"40dc16ed998df8a22d35ef66c79faf73bdd0ad25a73a4e84d572c1d596a1f223\"" May 27 03:23:58.636248 containerd[1990]: time="2025-05-27T03:23:58.636218675Z" level=info msg="connecting to shim 40dc16ed998df8a22d35ef66c79faf73bdd0ad25a73a4e84d572c1d596a1f223" address="unix:///run/containerd/s/6b23440e3c9f905360a9659bb1adf44f7755c5405bc44e69abdb5e1d3200cc74" protocol=ttrpc version=3 May 27 03:23:58.697687 systemd[1]: Started cri-containerd-40dc16ed998df8a22d35ef66c79faf73bdd0ad25a73a4e84d572c1d596a1f223.scope - libcontainer container 40dc16ed998df8a22d35ef66c79faf73bdd0ad25a73a4e84d572c1d596a1f223. 
May 27 03:23:58.754404 containerd[1990]: time="2025-05-27T03:23:58.754351607Z" level=info msg="StartContainer for \"40dc16ed998df8a22d35ef66c79faf73bdd0ad25a73a4e84d572c1d596a1f223\" returns successfully" May 27 03:23:59.697882 kubelet[3234]: E0527 03:23:59.697532 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.697882 kubelet[3234]: W0527 03:23:59.697555 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.697882 kubelet[3234]: E0527 03:23:59.697573 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.698744 kubelet[3234]: E0527 03:23:59.698599 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.698744 kubelet[3234]: W0527 03:23:59.698615 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.698744 kubelet[3234]: E0527 03:23:59.698631 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:59.699290 kubelet[3234]: I0527 03:23:59.699251 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-696b77c977-wg4w5" podStartSLOduration=2.445755472 podStartE2EDuration="4.699235909s" podCreationTimestamp="2025-05-27 03:23:55 +0000 UTC" firstStartedPulling="2025-05-27 03:23:56.305478971 +0000 UTC m=+19.901132797" lastFinishedPulling="2025-05-27 03:23:58.558959412 +0000 UTC m=+22.154613234" observedRunningTime="2025-05-27 03:23:59.696270367 +0000 UTC m=+23.291924208" watchObservedRunningTime="2025-05-27 03:23:59.699235909 +0000 UTC m=+23.294889768" May 27 03:23:59.700087 kubelet[3234]: E0527 03:23:59.699762 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.700420 kubelet[3234]: W0527 03:23:59.700157 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.700420 kubelet[3234]: E0527 03:23:59.700176 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.701374 kubelet[3234]: E0527 03:23:59.701250 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.701374 kubelet[3234]: W0527 03:23:59.701264 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.701374 kubelet[3234]: E0527 03:23:59.701277 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Error: unexpected end of JSON input" May 27 03:23:59.720349 kubelet[3234]: E0527 03:23:59.720332 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.720349 kubelet[3234]: W0527 03:23:59.720348 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.720426 kubelet[3234]: E0527 03:23:59.720357 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.722203 kubelet[3234]: E0527 03:23:59.722181 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.722203 kubelet[3234]: W0527 03:23:59.722199 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.722278 kubelet[3234]: E0527 03:23:59.722209 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:59.722555 kubelet[3234]: E0527 03:23:59.722445 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.722555 kubelet[3234]: W0527 03:23:59.722455 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.722627 kubelet[3234]: E0527 03:23:59.722561 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.723386 kubelet[3234]: E0527 03:23:59.723357 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.723386 kubelet[3234]: W0527 03:23:59.723377 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.723464 kubelet[3234]: E0527 03:23:59.723389 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:59.724377 kubelet[3234]: E0527 03:23:59.724354 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.724377 kubelet[3234]: W0527 03:23:59.724367 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.724377 kubelet[3234]: E0527 03:23:59.724377 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.725205 kubelet[3234]: E0527 03:23:59.725174 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.725205 kubelet[3234]: W0527 03:23:59.725186 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.725205 kubelet[3234]: E0527 03:23:59.725196 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:59.725919 kubelet[3234]: E0527 03:23:59.725852 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.725919 kubelet[3234]: W0527 03:23:59.725866 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.725919 kubelet[3234]: E0527 03:23:59.725876 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.727399 kubelet[3234]: E0527 03:23:59.727375 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.727553 kubelet[3234]: W0527 03:23:59.727536 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.727586 kubelet[3234]: E0527 03:23:59.727556 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:59.728819 kubelet[3234]: E0527 03:23:59.728777 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.728819 kubelet[3234]: W0527 03:23:59.728791 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.728819 kubelet[3234]: E0527 03:23:59.728802 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.730559 kubelet[3234]: E0527 03:23:59.730530 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.730559 kubelet[3234]: W0527 03:23:59.730544 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.730559 kubelet[3234]: E0527 03:23:59.730556 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:59.731302 kubelet[3234]: E0527 03:23:59.731285 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.731302 kubelet[3234]: W0527 03:23:59.731299 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.731385 kubelet[3234]: E0527 03:23:59.731310 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.732737 kubelet[3234]: E0527 03:23:59.732664 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.732877 kubelet[3234]: W0527 03:23:59.732859 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.732914 kubelet[3234]: E0527 03:23:59.732881 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:59.734497 kubelet[3234]: E0527 03:23:59.734135 3234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:59.734497 kubelet[3234]: W0527 03:23:59.734148 3234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:59.734497 kubelet[3234]: E0527 03:23:59.734160 3234 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:59.775735 containerd[1990]: time="2025-05-27T03:23:59.775700003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:59.781023 containerd[1990]: time="2025-05-27T03:23:59.780991558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:23:59.785298 containerd[1990]: time="2025-05-27T03:23:59.785269511Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:59.793779 containerd[1990]: time="2025-05-27T03:23:59.793721539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:59.795208 containerd[1990]: time="2025-05-27T03:23:59.795070764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.232267627s" May 27 03:23:59.795603 containerd[1990]: time="2025-05-27T03:23:59.795139476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:23:59.804356 containerd[1990]: time="2025-05-27T03:23:59.804273199Z" level=info msg="CreateContainer within sandbox \"85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:23:59.825648 containerd[1990]: time="2025-05-27T03:23:59.825565796Z" level=info msg="Container 7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:59.845749 containerd[1990]: time="2025-05-27T03:23:59.845706472Z" level=info msg="CreateContainer within sandbox \"85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791\"" May 27 03:23:59.848618 containerd[1990]: time="2025-05-27T03:23:59.847023296Z" level=info msg="StartContainer for \"7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791\"" May 27 03:23:59.849877 containerd[1990]: time="2025-05-27T03:23:59.849843081Z" level=info msg="connecting to shim 7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791" address="unix:///run/containerd/s/70c6558698ef570a8ef2f4613d6ff3ff01da250d7693bed61c8d832a7b19f5c9" protocol=ttrpc version=3 May 27 03:23:59.886755 systemd[1]: Started cri-containerd-7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791.scope - libcontainer container 
7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791. May 27 03:23:59.942774 containerd[1990]: time="2025-05-27T03:23:59.942733983Z" level=info msg="StartContainer for \"7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791\" returns successfully" May 27 03:23:59.949607 systemd[1]: cri-containerd-7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791.scope: Deactivated successfully. May 27 03:23:59.949948 systemd[1]: cri-containerd-7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791.scope: Consumed 34ms CPU time, 6M memory peak, 4.2M written to disk. May 27 03:23:59.979749 containerd[1990]: time="2025-05-27T03:23:59.979562460Z" level=info msg="received exit event container_id:\"7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791\" id:\"7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791\" pid:4189 exited_at:{seconds:1748316239 nanos:957539589}" May 27 03:23:59.991756 containerd[1990]: time="2025-05-27T03:23:59.991712933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791\" id:\"7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791\" pid:4189 exited_at:{seconds:1748316239 nanos:957539589}" May 27 03:24:00.054307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7307751232fb3c0d4901c8f2b263d86e5e5ed772004bfc3b841876ba6fd50791-rootfs.mount: Deactivated successfully. 
May 27 03:24:00.561983 kubelet[3234]: E0527 03:24:00.561378 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-27b29" podUID="5b0dc86a-7a64-45f7-952b-6e3978d12edf" May 27 03:24:00.684205 containerd[1990]: time="2025-05-27T03:24:00.684160736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:24:02.563378 kubelet[3234]: E0527 03:24:02.561720 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-27b29" podUID="5b0dc86a-7a64-45f7-952b-6e3978d12edf" May 27 03:24:03.689648 containerd[1990]: time="2025-05-27T03:24:03.689596611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.691679 containerd[1990]: time="2025-05-27T03:24:03.691624621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:24:03.694819 containerd[1990]: time="2025-05-27T03:24:03.693883400Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.697513 containerd[1990]: time="2025-05-27T03:24:03.697459999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.698263 containerd[1990]: time="2025-05-27T03:24:03.698233550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" 
with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.013866164s" May 27 03:24:03.698343 containerd[1990]: time="2025-05-27T03:24:03.698273465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:24:03.708990 containerd[1990]: time="2025-05-27T03:24:03.708957675Z" level=info msg="CreateContainer within sandbox \"85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:24:03.722074 containerd[1990]: time="2025-05-27T03:24:03.722032886Z" level=info msg="Container fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:03.756350 containerd[1990]: time="2025-05-27T03:24:03.756091832Z" level=info msg="CreateContainer within sandbox \"85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af\"" May 27 03:24:03.759345 containerd[1990]: time="2025-05-27T03:24:03.759309011Z" level=info msg="StartContainer for \"fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af\"" May 27 03:24:03.762910 containerd[1990]: time="2025-05-27T03:24:03.762860322Z" level=info msg="connecting to shim fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af" address="unix:///run/containerd/s/70c6558698ef570a8ef2f4613d6ff3ff01da250d7693bed61c8d832a7b19f5c9" protocol=ttrpc version=3 May 27 03:24:03.799716 systemd[1]: Started cri-containerd-fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af.scope - libcontainer container 
fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af. May 27 03:24:03.857187 containerd[1990]: time="2025-05-27T03:24:03.857132519Z" level=info msg="StartContainer for \"fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af\" returns successfully" May 27 03:24:04.561844 kubelet[3234]: E0527 03:24:04.561788 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-27b29" podUID="5b0dc86a-7a64-45f7-952b-6e3978d12edf" May 27 03:24:04.636267 systemd[1]: cri-containerd-fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af.scope: Deactivated successfully. May 27 03:24:04.637663 containerd[1990]: time="2025-05-27T03:24:04.636863527Z" level=info msg="received exit event container_id:\"fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af\" id:\"fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af\" pid:4248 exited_at:{seconds:1748316244 nanos:635615468}" May 27 03:24:04.637595 systemd[1]: cri-containerd-fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af.scope: Consumed 525ms CPU time, 169M memory peak, 6.3M read from disk, 170.9M written to disk. May 27 03:24:04.639462 containerd[1990]: time="2025-05-27T03:24:04.639416517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af\" id:\"fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af\" pid:4248 exited_at:{seconds:1748316244 nanos:635615468}" May 27 03:24:04.675233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fec4edcaf26d223648b5f21e47e9249c08f041e7b987b85d0761749508d666af-rootfs.mount: Deactivated successfully. 
May 27 03:24:04.757872 kubelet[3234]: I0527 03:24:04.757061 3234 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:24:04.907153 systemd[1]: Created slice kubepods-besteffort-pod8c32713b_5a33_4442_a018_326711c1be7b.slice - libcontainer container kubepods-besteffort-pod8c32713b_5a33_4442_a018_326711c1be7b.slice. May 27 03:24:04.924278 systemd[1]: Created slice kubepods-burstable-pod47be3583_df5e_4c3c_9ca6_24fbbdb1d253.slice - libcontainer container kubepods-burstable-pod47be3583_df5e_4c3c_9ca6_24fbbdb1d253.slice. May 27 03:24:04.944601 systemd[1]: Created slice kubepods-besteffort-podd81b5d20_78b9_4ebb_9eb4_dc51405b66e7.slice - libcontainer container kubepods-besteffort-podd81b5d20_78b9_4ebb_9eb4_dc51405b66e7.slice. May 27 03:24:04.961331 systemd[1]: Created slice kubepods-burstable-pod466fd2f4_81f8_4a10_914e_b412ae83f7aa.slice - libcontainer container kubepods-burstable-pod466fd2f4_81f8_4a10_914e_b412ae83f7aa.slice. May 27 03:24:04.971617 systemd[1]: Created slice kubepods-besteffort-pod2b70e4e3_90ce_46ab_b026_f5df1c2548db.slice - libcontainer container kubepods-besteffort-pod2b70e4e3_90ce_46ab_b026_f5df1c2548db.slice. 
May 27 03:24:04.975010 kubelet[3234]: I0527 03:24:04.974946 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2b70e4e3-90ce-46ab-b026-f5df1c2548db-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-ghcj4\" (UID: \"2b70e4e3-90ce-46ab-b026-f5df1c2548db\") " pod="calico-system/goldmane-78d55f7ddc-ghcj4" May 27 03:24:04.975400 kubelet[3234]: I0527 03:24:04.975109 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt7fr\" (UniqueName: \"kubernetes.io/projected/2b70e4e3-90ce-46ab-b026-f5df1c2548db-kube-api-access-rt7fr\") pod \"goldmane-78d55f7ddc-ghcj4\" (UID: \"2b70e4e3-90ce-46ab-b026-f5df1c2548db\") " pod="calico-system/goldmane-78d55f7ddc-ghcj4" May 27 03:24:04.975400 kubelet[3234]: I0527 03:24:04.975239 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b9b148-d786-4df2-a9e7-5d0369298029-tigera-ca-bundle\") pod \"calico-kube-controllers-744b9f4d8b-xp4nf\" (UID: \"54b9b148-d786-4df2-a9e7-5d0369298029\") " pod="calico-system/calico-kube-controllers-744b9f4d8b-xp4nf" May 27 03:24:04.975400 kubelet[3234]: I0527 03:24:04.975271 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56ee3bee-606e-4fbc-9ef0-e5652e2baa12-calico-apiserver-certs\") pod \"calico-apiserver-699c564668-bfrlq\" (UID: \"56ee3bee-606e-4fbc-9ef0-e5652e2baa12\") " pod="calico-apiserver/calico-apiserver-699c564668-bfrlq" May 27 03:24:04.976185 kubelet[3234]: I0527 03:24:04.975296 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466fd2f4-81f8-4a10-914e-b412ae83f7aa-config-volume\") pod \"coredns-674b8bbfcf-phh6k\" 
(UID: \"466fd2f4-81f8-4a10-914e-b412ae83f7aa\") " pod="kube-system/coredns-674b8bbfcf-phh6k" May 27 03:24:04.976185 kubelet[3234]: I0527 03:24:04.975519 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbtsk\" (UniqueName: \"kubernetes.io/projected/466fd2f4-81f8-4a10-914e-b412ae83f7aa-kube-api-access-bbtsk\") pod \"coredns-674b8bbfcf-phh6k\" (UID: \"466fd2f4-81f8-4a10-914e-b412ae83f7aa\") " pod="kube-system/coredns-674b8bbfcf-phh6k" May 27 03:24:04.976185 kubelet[3234]: I0527 03:24:04.975755 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47be3583-df5e-4c3c-9ca6-24fbbdb1d253-config-volume\") pod \"coredns-674b8bbfcf-q2bwz\" (UID: \"47be3583-df5e-4c3c-9ca6-24fbbdb1d253\") " pod="kube-system/coredns-674b8bbfcf-q2bwz" May 27 03:24:04.976185 kubelet[3234]: I0527 03:24:04.975790 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d81b5d20-78b9-4ebb-9eb4-dc51405b66e7-calico-apiserver-certs\") pod \"calico-apiserver-699c564668-q4mpt\" (UID: \"d81b5d20-78b9-4ebb-9eb4-dc51405b66e7\") " pod="calico-apiserver/calico-apiserver-699c564668-q4mpt" May 27 03:24:04.976185 kubelet[3234]: I0527 03:24:04.975816 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b70e4e3-90ce-46ab-b026-f5df1c2548db-config\") pod \"goldmane-78d55f7ddc-ghcj4\" (UID: \"2b70e4e3-90ce-46ab-b026-f5df1c2548db\") " pod="calico-system/goldmane-78d55f7ddc-ghcj4" May 27 03:24:04.978084 kubelet[3234]: I0527 03:24:04.975842 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c32713b-5a33-4442-a018-326711c1be7b-whisker-ca-bundle\") pod \"whisker-6cc75cb46c-rqw4c\" (UID: \"8c32713b-5a33-4442-a018-326711c1be7b\") " pod="calico-system/whisker-6cc75cb46c-rqw4c" May 27 03:24:04.978084 kubelet[3234]: I0527 03:24:04.975870 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx28t\" (UniqueName: \"kubernetes.io/projected/47be3583-df5e-4c3c-9ca6-24fbbdb1d253-kube-api-access-zx28t\") pod \"coredns-674b8bbfcf-q2bwz\" (UID: \"47be3583-df5e-4c3c-9ca6-24fbbdb1d253\") " pod="kube-system/coredns-674b8bbfcf-q2bwz" May 27 03:24:04.978084 kubelet[3234]: I0527 03:24:04.977272 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqzz\" (UniqueName: \"kubernetes.io/projected/54b9b148-d786-4df2-a9e7-5d0369298029-kube-api-access-dpqzz\") pod \"calico-kube-controllers-744b9f4d8b-xp4nf\" (UID: \"54b9b148-d786-4df2-a9e7-5d0369298029\") " pod="calico-system/calico-kube-controllers-744b9f4d8b-xp4nf" May 27 03:24:04.978084 kubelet[3234]: I0527 03:24:04.977301 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c32713b-5a33-4442-a018-326711c1be7b-whisker-backend-key-pair\") pod \"whisker-6cc75cb46c-rqw4c\" (UID: \"8c32713b-5a33-4442-a018-326711c1be7b\") " pod="calico-system/whisker-6cc75cb46c-rqw4c" May 27 03:24:04.978084 kubelet[3234]: I0527 03:24:04.977329 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbf9\" (UniqueName: \"kubernetes.io/projected/d81b5d20-78b9-4ebb-9eb4-dc51405b66e7-kube-api-access-fvbf9\") pod \"calico-apiserver-699c564668-q4mpt\" (UID: \"d81b5d20-78b9-4ebb-9eb4-dc51405b66e7\") " pod="calico-apiserver/calico-apiserver-699c564668-q4mpt" May 27 03:24:04.978264 kubelet[3234]: I0527 03:24:04.977354 3234 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b70e4e3-90ce-46ab-b026-f5df1c2548db-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-ghcj4\" (UID: \"2b70e4e3-90ce-46ab-b026-f5df1c2548db\") " pod="calico-system/goldmane-78d55f7ddc-ghcj4" May 27 03:24:04.978264 kubelet[3234]: I0527 03:24:04.977382 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvcv\" (UniqueName: \"kubernetes.io/projected/56ee3bee-606e-4fbc-9ef0-e5652e2baa12-kube-api-access-xqvcv\") pod \"calico-apiserver-699c564668-bfrlq\" (UID: \"56ee3bee-606e-4fbc-9ef0-e5652e2baa12\") " pod="calico-apiserver/calico-apiserver-699c564668-bfrlq" May 27 03:24:04.978264 kubelet[3234]: I0527 03:24:04.977410 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm8wd\" (UniqueName: \"kubernetes.io/projected/8c32713b-5a33-4442-a018-326711c1be7b-kube-api-access-qm8wd\") pod \"whisker-6cc75cb46c-rqw4c\" (UID: \"8c32713b-5a33-4442-a018-326711c1be7b\") " pod="calico-system/whisker-6cc75cb46c-rqw4c" May 27 03:24:04.986457 systemd[1]: Created slice kubepods-besteffort-pod54b9b148_d786_4df2_a9e7_5d0369298029.slice - libcontainer container kubepods-besteffort-pod54b9b148_d786_4df2_a9e7_5d0369298029.slice. May 27 03:24:04.994980 systemd[1]: Created slice kubepods-besteffort-pod56ee3bee_606e_4fbc_9ef0_e5652e2baa12.slice - libcontainer container kubepods-besteffort-pod56ee3bee_606e_4fbc_9ef0_e5652e2baa12.slice. 
May 27 03:24:05.223304 containerd[1990]: time="2025-05-27T03:24:05.223186744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc75cb46c-rqw4c,Uid:8c32713b-5a33-4442-a018-326711c1be7b,Namespace:calico-system,Attempt:0,}"
May 27 03:24:05.241532 containerd[1990]: time="2025-05-27T03:24:05.241290364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q2bwz,Uid:47be3583-df5e-4c3c-9ca6-24fbbdb1d253,Namespace:kube-system,Attempt:0,}"
May 27 03:24:05.256875 containerd[1990]: time="2025-05-27T03:24:05.256814952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-q4mpt,Uid:d81b5d20-78b9-4ebb-9eb4-dc51405b66e7,Namespace:calico-apiserver,Attempt:0,}"
May 27 03:24:05.285816 containerd[1990]: time="2025-05-27T03:24:05.285583452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ghcj4,Uid:2b70e4e3-90ce-46ab-b026-f5df1c2548db,Namespace:calico-system,Attempt:0,}"
May 27 03:24:05.287501 containerd[1990]: time="2025-05-27T03:24:05.287441219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-phh6k,Uid:466fd2f4-81f8-4a10-914e-b412ae83f7aa,Namespace:kube-system,Attempt:0,}"
May 27 03:24:05.293402 containerd[1990]: time="2025-05-27T03:24:05.293370087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744b9f4d8b-xp4nf,Uid:54b9b148-d786-4df2-a9e7-5d0369298029,Namespace:calico-system,Attempt:0,}"
May 27 03:24:05.300608 containerd[1990]: time="2025-05-27T03:24:05.300185795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-bfrlq,Uid:56ee3bee-606e-4fbc-9ef0-e5652e2baa12,Namespace:calico-apiserver,Attempt:0,}"
May 27 03:24:05.536520 containerd[1990]: time="2025-05-27T03:24:05.535643463Z" level=error msg="Failed to destroy network for sandbox \"68f03835c8d15034c4012f521596ec6b8869502db40faa6a0ca98b441f8fd075\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.538215 containerd[1990]: time="2025-05-27T03:24:05.538158494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744b9f4d8b-xp4nf,Uid:54b9b148-d786-4df2-a9e7-5d0369298029,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f03835c8d15034c4012f521596ec6b8869502db40faa6a0ca98b441f8fd075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.538431 kubelet[3234]: E0527 03:24:05.538394 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f03835c8d15034c4012f521596ec6b8869502db40faa6a0ca98b441f8fd075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.538514 kubelet[3234]: E0527 03:24:05.538458 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f03835c8d15034c4012f521596ec6b8869502db40faa6a0ca98b441f8fd075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-744b9f4d8b-xp4nf"
May 27 03:24:05.538514 kubelet[3234]: E0527 03:24:05.538478 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f03835c8d15034c4012f521596ec6b8869502db40faa6a0ca98b441f8fd075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-744b9f4d8b-xp4nf"
May 27 03:24:05.538587 kubelet[3234]: E0527 03:24:05.538550 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-744b9f4d8b-xp4nf_calico-system(54b9b148-d786-4df2-a9e7-5d0369298029)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-744b9f4d8b-xp4nf_calico-system(54b9b148-d786-4df2-a9e7-5d0369298029)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68f03835c8d15034c4012f521596ec6b8869502db40faa6a0ca98b441f8fd075\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-744b9f4d8b-xp4nf" podUID="54b9b148-d786-4df2-a9e7-5d0369298029"
May 27 03:24:05.553187 containerd[1990]: time="2025-05-27T03:24:05.553140078Z" level=error msg="Failed to destroy network for sandbox \"03e1eadf56a6965cda08f3f3a1bf25ed98f9c12041cccbe898bd141191f378f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.556033 containerd[1990]: time="2025-05-27T03:24:05.555760228Z" level=error msg="Failed to destroy network for sandbox \"f9a789ee681195a97baf0df90085f08ebb69d3229b14cc0fd826018b4a480360\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.556230 containerd[1990]: time="2025-05-27T03:24:05.556203819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q2bwz,Uid:47be3583-df5e-4c3c-9ca6-24fbbdb1d253,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e1eadf56a6965cda08f3f3a1bf25ed98f9c12041cccbe898bd141191f378f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.556570 kubelet[3234]: E0527 03:24:05.556535 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e1eadf56a6965cda08f3f3a1bf25ed98f9c12041cccbe898bd141191f378f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.556651 kubelet[3234]: E0527 03:24:05.556602 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e1eadf56a6965cda08f3f3a1bf25ed98f9c12041cccbe898bd141191f378f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q2bwz"
May 27 03:24:05.556651 kubelet[3234]: E0527 03:24:05.556622 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e1eadf56a6965cda08f3f3a1bf25ed98f9c12041cccbe898bd141191f378f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q2bwz"
May 27 03:24:05.556743 kubelet[3234]: E0527 03:24:05.556680 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q2bwz_kube-system(47be3583-df5e-4c3c-9ca6-24fbbdb1d253)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q2bwz_kube-system(47be3583-df5e-4c3c-9ca6-24fbbdb1d253)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03e1eadf56a6965cda08f3f3a1bf25ed98f9c12041cccbe898bd141191f378f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q2bwz" podUID="47be3583-df5e-4c3c-9ca6-24fbbdb1d253"
May 27 03:24:05.582309 containerd[1990]: time="2025-05-27T03:24:05.560984718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-bfrlq,Uid:56ee3bee-606e-4fbc-9ef0-e5652e2baa12,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a789ee681195a97baf0df90085f08ebb69d3229b14cc0fd826018b4a480360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.582529 containerd[1990]: time="2025-05-27T03:24:05.566931362Z" level=error msg="Failed to destroy network for sandbox \"7d798cc1c863b07f832f0587520c35be1090e1bd6019245dc44c8152ab108dde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.582624 containerd[1990]: time="2025-05-27T03:24:05.569885739Z" level=error msg="Failed to destroy network for sandbox \"a02b6d28ff48d85549137bceb5c70f7b4a9a8823ec51ee65741f72a966ab9f1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.583243 kubelet[3234]: E0527 03:24:05.583029 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a789ee681195a97baf0df90085f08ebb69d3229b14cc0fd826018b4a480360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.583243 kubelet[3234]: E0527 03:24:05.583096 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a789ee681195a97baf0df90085f08ebb69d3229b14cc0fd826018b4a480360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c564668-bfrlq"
May 27 03:24:05.583243 kubelet[3234]: E0527 03:24:05.583123 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a789ee681195a97baf0df90085f08ebb69d3229b14cc0fd826018b4a480360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c564668-bfrlq"
May 27 03:24:05.583851 containerd[1990]: time="2025-05-27T03:24:05.574267513Z" level=error msg="Failed to destroy network for sandbox \"122f53cc8c8181d5fc805032412839b7f99b79e7007a3ef2a011f3e09200771e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.583851 containerd[1990]: time="2025-05-27T03:24:05.577444412Z" level=error msg="Failed to destroy network for sandbox \"b15361521c268e956af2f19baf3c91c9b059bad83fb62ec5ba505485374b3232\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.584040 kubelet[3234]: E0527 03:24:05.583194 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699c564668-bfrlq_calico-apiserver(56ee3bee-606e-4fbc-9ef0-e5652e2baa12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699c564668-bfrlq_calico-apiserver(56ee3bee-606e-4fbc-9ef0-e5652e2baa12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9a789ee681195a97baf0df90085f08ebb69d3229b14cc0fd826018b4a480360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699c564668-bfrlq" podUID="56ee3bee-606e-4fbc-9ef0-e5652e2baa12"
May 27 03:24:05.584969 containerd[1990]: time="2025-05-27T03:24:05.584925423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-q4mpt,Uid:d81b5d20-78b9-4ebb-9eb4-dc51405b66e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d798cc1c863b07f832f0587520c35be1090e1bd6019245dc44c8152ab108dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.585349 kubelet[3234]: E0527 03:24:05.585276 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d798cc1c863b07f832f0587520c35be1090e1bd6019245dc44c8152ab108dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.585430 kubelet[3234]: E0527 03:24:05.585362 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d798cc1c863b07f832f0587520c35be1090e1bd6019245dc44c8152ab108dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c564668-q4mpt"
May 27 03:24:05.585430 kubelet[3234]: E0527 03:24:05.585393 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d798cc1c863b07f832f0587520c35be1090e1bd6019245dc44c8152ab108dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c564668-q4mpt"
May 27 03:24:05.585547 kubelet[3234]: E0527 03:24:05.585452 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699c564668-q4mpt_calico-apiserver(d81b5d20-78b9-4ebb-9eb4-dc51405b66e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699c564668-q4mpt_calico-apiserver(d81b5d20-78b9-4ebb-9eb4-dc51405b66e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d798cc1c863b07f832f0587520c35be1090e1bd6019245dc44c8152ab108dde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699c564668-q4mpt" podUID="d81b5d20-78b9-4ebb-9eb4-dc51405b66e7"
May 27 03:24:05.587126 containerd[1990]: time="2025-05-27T03:24:05.587053244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc75cb46c-rqw4c,Uid:8c32713b-5a33-4442-a018-326711c1be7b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b6d28ff48d85549137bceb5c70f7b4a9a8823ec51ee65741f72a966ab9f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.587383 kubelet[3234]: E0527 03:24:05.587272 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b6d28ff48d85549137bceb5c70f7b4a9a8823ec51ee65741f72a966ab9f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.587383 kubelet[3234]: E0527 03:24:05.587342 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b6d28ff48d85549137bceb5c70f7b4a9a8823ec51ee65741f72a966ab9f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cc75cb46c-rqw4c"
May 27 03:24:05.587383 kubelet[3234]: E0527 03:24:05.587373 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b6d28ff48d85549137bceb5c70f7b4a9a8823ec51ee65741f72a966ab9f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cc75cb46c-rqw4c"
May 27 03:24:05.587697 kubelet[3234]: E0527 03:24:05.587429 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cc75cb46c-rqw4c_calico-system(8c32713b-5a33-4442-a018-326711c1be7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cc75cb46c-rqw4c_calico-system(8c32713b-5a33-4442-a018-326711c1be7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a02b6d28ff48d85549137bceb5c70f7b4a9a8823ec51ee65741f72a966ab9f1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cc75cb46c-rqw4c" podUID="8c32713b-5a33-4442-a018-326711c1be7b"
May 27 03:24:05.589115 containerd[1990]: time="2025-05-27T03:24:05.589072997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-phh6k,Uid:466fd2f4-81f8-4a10-914e-b412ae83f7aa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f53cc8c8181d5fc805032412839b7f99b79e7007a3ef2a011f3e09200771e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.589456 kubelet[3234]: E0527 03:24:05.589427 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f53cc8c8181d5fc805032412839b7f99b79e7007a3ef2a011f3e09200771e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.589561 kubelet[3234]: E0527 03:24:05.589523 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f53cc8c8181d5fc805032412839b7f99b79e7007a3ef2a011f3e09200771e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-phh6k"
May 27 03:24:05.589613 kubelet[3234]: E0527 03:24:05.589577 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f53cc8c8181d5fc805032412839b7f99b79e7007a3ef2a011f3e09200771e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-phh6k"
May 27 03:24:05.589799 kubelet[3234]: E0527 03:24:05.589746 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-phh6k_kube-system(466fd2f4-81f8-4a10-914e-b412ae83f7aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-phh6k_kube-system(466fd2f4-81f8-4a10-914e-b412ae83f7aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"122f53cc8c8181d5fc805032412839b7f99b79e7007a3ef2a011f3e09200771e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-phh6k" podUID="466fd2f4-81f8-4a10-914e-b412ae83f7aa"
May 27 03:24:05.591194 containerd[1990]: time="2025-05-27T03:24:05.591159527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ghcj4,Uid:2b70e4e3-90ce-46ab-b026-f5df1c2548db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15361521c268e956af2f19baf3c91c9b059bad83fb62ec5ba505485374b3232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.591367 kubelet[3234]: E0527 03:24:05.591315 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15361521c268e956af2f19baf3c91c9b059bad83fb62ec5ba505485374b3232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:05.591468 kubelet[3234]: E0527 03:24:05.591383 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15361521c268e956af2f19baf3c91c9b059bad83fb62ec5ba505485374b3232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-ghcj4"
May 27 03:24:05.591468 kubelet[3234]: E0527 03:24:05.591408 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15361521c268e956af2f19baf3c91c9b059bad83fb62ec5ba505485374b3232\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-ghcj4"
May 27 03:24:05.591468 kubelet[3234]: E0527 03:24:05.591451 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-ghcj4_calico-system(2b70e4e3-90ce-46ab-b026-f5df1c2548db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-ghcj4_calico-system(2b70e4e3-90ce-46ab-b026-f5df1c2548db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b15361521c268e956af2f19baf3c91c9b059bad83fb62ec5ba505485374b3232\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db"
May 27 03:24:05.765382 containerd[1990]: time="2025-05-27T03:24:05.765326413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 27 03:24:06.568124 systemd[1]: Created slice kubepods-besteffort-pod5b0dc86a_7a64_45f7_952b_6e3978d12edf.slice - libcontainer container kubepods-besteffort-pod5b0dc86a_7a64_45f7_952b_6e3978d12edf.slice.
May 27 03:24:06.570557 containerd[1990]: time="2025-05-27T03:24:06.570518930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-27b29,Uid:5b0dc86a-7a64-45f7-952b-6e3978d12edf,Namespace:calico-system,Attempt:0,}"
May 27 03:24:06.659776 containerd[1990]: time="2025-05-27T03:24:06.659733269Z" level=error msg="Failed to destroy network for sandbox \"f7206212ef5ac393f8a77965d645ccdae9a1ae057a9dc432f33688e91ba26f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:06.663662 systemd[1]: run-netns-cni\x2dc31064d2\x2dc27c\x2dc0e2\x2d65b4\x2d9fe4ce29829c.mount: Deactivated successfully.
May 27 03:24:06.664852 containerd[1990]: time="2025-05-27T03:24:06.664796947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-27b29,Uid:5b0dc86a-7a64-45f7-952b-6e3978d12edf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7206212ef5ac393f8a77965d645ccdae9a1ae057a9dc432f33688e91ba26f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:06.665739 kubelet[3234]: E0527 03:24:06.665692 3234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7206212ef5ac393f8a77965d645ccdae9a1ae057a9dc432f33688e91ba26f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:24:06.666075 kubelet[3234]: E0527 03:24:06.665752 3234 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7206212ef5ac393f8a77965d645ccdae9a1ae057a9dc432f33688e91ba26f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-27b29"
May 27 03:24:06.666075 kubelet[3234]: E0527 03:24:06.665777 3234 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7206212ef5ac393f8a77965d645ccdae9a1ae057a9dc432f33688e91ba26f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-27b29"
May 27 03:24:06.666075 kubelet[3234]: E0527 03:24:06.665854 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-27b29_calico-system(5b0dc86a-7a64-45f7-952b-6e3978d12edf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-27b29_calico-system(5b0dc86a-7a64-45f7-952b-6e3978d12edf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7206212ef5ac393f8a77965d645ccdae9a1ae057a9dc432f33688e91ba26f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-27b29" podUID="5b0dc86a-7a64-45f7-952b-6e3978d12edf"
May 27 03:24:11.796354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3496427133.mount: Deactivated successfully.
May 27 03:24:11.874006 containerd[1990]: time="2025-05-27T03:24:11.865846323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:11.891954 containerd[1990]: time="2025-05-27T03:24:11.891899578Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:11.893983 containerd[1990]: time="2025-05-27T03:24:11.893543434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372"
May 27 03:24:11.894603 containerd[1990]: time="2025-05-27T03:24:11.894581008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:11.896647 containerd[1990]: time="2025-05-27T03:24:11.896613884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.12960904s"
May 27 03:24:11.896713 containerd[1990]: time="2025-05-27T03:24:11.896655400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\""
May 27 03:24:11.942512 containerd[1990]: time="2025-05-27T03:24:11.942455693Z" level=info msg="CreateContainer within sandbox \"85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 27 03:24:11.980353 containerd[1990]: time="2025-05-27T03:24:11.978170247Z" level=info msg="Container 3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:11.980030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313965091.mount: Deactivated successfully.
May 27 03:24:12.050312 containerd[1990]: time="2025-05-27T03:24:12.050164977Z" level=info msg="CreateContainer within sandbox \"85f52f7c0db5439b6cd81cc6867be6aacc662766378ef4bce842c41f77733d96\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\""
May 27 03:24:12.051678 containerd[1990]: time="2025-05-27T03:24:12.051612176Z" level=info msg="StartContainer for \"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\""
May 27 03:24:12.057873 containerd[1990]: time="2025-05-27T03:24:12.057824799Z" level=info msg="connecting to shim 3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8" address="unix:///run/containerd/s/70c6558698ef570a8ef2f4613d6ff3ff01da250d7693bed61c8d832a7b19f5c9" protocol=ttrpc version=3
May 27 03:24:12.140653 systemd[1]: Started cri-containerd-3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8.scope - libcontainer container 3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8.
May 27 03:24:12.205369 containerd[1990]: time="2025-05-27T03:24:12.205332543Z" level=info msg="StartContainer for \"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\" returns successfully"
May 27 03:24:12.572582 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 27 03:24:12.574443 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
May 27 03:24:12.973789 kubelet[3234]: I0527 03:24:12.972861 3234 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm8wd\" (UniqueName: \"kubernetes.io/projected/8c32713b-5a33-4442-a018-326711c1be7b-kube-api-access-qm8wd\") pod \"8c32713b-5a33-4442-a018-326711c1be7b\" (UID: \"8c32713b-5a33-4442-a018-326711c1be7b\") "
May 27 03:24:12.973789 kubelet[3234]: I0527 03:24:12.972985 3234 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c32713b-5a33-4442-a018-326711c1be7b-whisker-ca-bundle\") pod \"8c32713b-5a33-4442-a018-326711c1be7b\" (UID: \"8c32713b-5a33-4442-a018-326711c1be7b\") "
May 27 03:24:12.973789 kubelet[3234]: I0527 03:24:12.973015 3234 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c32713b-5a33-4442-a018-326711c1be7b-whisker-backend-key-pair\") pod \"8c32713b-5a33-4442-a018-326711c1be7b\" (UID: \"8c32713b-5a33-4442-a018-326711c1be7b\") "
May 27 03:24:12.997254 systemd[1]: var-lib-kubelet-pods-8c32713b\x2d5a33\x2d4442\x2da018\x2d326711c1be7b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqm8wd.mount: Deactivated successfully.
May 27 03:24:12.997861 kubelet[3234]: I0527 03:24:12.988099 3234 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c32713b-5a33-4442-a018-326711c1be7b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8c32713b-5a33-4442-a018-326711c1be7b" (UID: "8c32713b-5a33-4442-a018-326711c1be7b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 03:24:13.000549 kubelet[3234]: I0527 03:24:13.000475 3234 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c32713b-5a33-4442-a018-326711c1be7b-kube-api-access-qm8wd" (OuterVolumeSpecName: "kube-api-access-qm8wd") pod "8c32713b-5a33-4442-a018-326711c1be7b" (UID: "8c32713b-5a33-4442-a018-326711c1be7b"). InnerVolumeSpecName "kube-api-access-qm8wd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:24:13.007057 systemd[1]: var-lib-kubelet-pods-8c32713b\x2d5a33\x2d4442\x2da018\x2d326711c1be7b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
May 27 03:24:13.007576 kubelet[3234]: I0527 03:24:13.007532 3234 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c32713b-5a33-4442-a018-326711c1be7b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8c32713b-5a33-4442-a018-326711c1be7b" (UID: "8c32713b-5a33-4442-a018-326711c1be7b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 03:24:13.075967 kubelet[3234]: I0527 03:24:13.075864 3234 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c32713b-5a33-4442-a018-326711c1be7b-whisker-ca-bundle\") on node \"ip-172-31-28-64\" DevicePath \"\""
May 27 03:24:13.075967 kubelet[3234]: I0527 03:24:13.075921 3234 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c32713b-5a33-4442-a018-326711c1be7b-whisker-backend-key-pair\") on node \"ip-172-31-28-64\" DevicePath \"\""
May 27 03:24:13.075967 kubelet[3234]: I0527 03:24:13.075937 3234 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qm8wd\" (UniqueName: \"kubernetes.io/projected/8c32713b-5a33-4442-a018-326711c1be7b-kube-api-access-qm8wd\") on node \"ip-172-31-28-64\" DevicePath \"\""
May 27 03:24:13.312216 containerd[1990]: time="2025-05-27T03:24:13.312073911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\" id:\"307da95fe59580fd4c279a1b16c51d11e85c501504ef1dbc08ff2f80a86d7533\" pid:4581 exit_status:1 exited_at:{seconds:1748316253 nanos:301888365}"
May 27 03:24:13.917868 systemd[1]: Removed slice kubepods-besteffort-pod8c32713b_5a33_4442_a018_326711c1be7b.slice - libcontainer container kubepods-besteffort-pod8c32713b_5a33_4442_a018_326711c1be7b.slice.
May 27 03:24:13.985202 kubelet[3234]: I0527 03:24:13.943006 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z6qpw" podStartSLOduration=2.715357792 podStartE2EDuration="17.942875661s" podCreationTimestamp="2025-05-27 03:23:56 +0000 UTC" firstStartedPulling="2025-05-27 03:23:56.669749952 +0000 UTC m=+20.265403783" lastFinishedPulling="2025-05-27 03:24:11.897267832 +0000 UTC m=+35.492921652" observedRunningTime="2025-05-27 03:24:12.91673087 +0000 UTC m=+36.512384713" watchObservedRunningTime="2025-05-27 03:24:13.942875661 +0000 UTC m=+37.538529500" May 27 03:24:14.054046 systemd[1]: Created slice kubepods-besteffort-pode8dfedbc_afe6_4b30_b214_b9a2e156015d.slice - libcontainer container kubepods-besteffort-pode8dfedbc_afe6_4b30_b214_b9a2e156015d.slice. May 27 03:24:14.200360 kubelet[3234]: I0527 03:24:14.198536 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8dfedbc-afe6-4b30-b214-b9a2e156015d-whisker-ca-bundle\") pod \"whisker-787bbfb675-hlbrh\" (UID: \"e8dfedbc-afe6-4b30-b214-b9a2e156015d\") " pod="calico-system/whisker-787bbfb675-hlbrh" May 27 03:24:14.200360 kubelet[3234]: I0527 03:24:14.198595 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw88x\" (UniqueName: \"kubernetes.io/projected/e8dfedbc-afe6-4b30-b214-b9a2e156015d-kube-api-access-mw88x\") pod \"whisker-787bbfb675-hlbrh\" (UID: \"e8dfedbc-afe6-4b30-b214-b9a2e156015d\") " pod="calico-system/whisker-787bbfb675-hlbrh" May 27 03:24:14.200360 kubelet[3234]: I0527 03:24:14.198672 3234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8dfedbc-afe6-4b30-b214-b9a2e156015d-whisker-backend-key-pair\") pod \"whisker-787bbfb675-hlbrh\" (UID: 
\"e8dfedbc-afe6-4b30-b214-b9a2e156015d\") " pod="calico-system/whisker-787bbfb675-hlbrh" May 27 03:24:14.230229 containerd[1990]: time="2025-05-27T03:24:14.230192204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\" id:\"fc9c137347b1fee67ac614e2c4c80548ceac19348e105141c316bf1ed1caad96\" pid:4613 exit_status:1 exited_at:{seconds:1748316254 nanos:229640964}" May 27 03:24:14.362316 containerd[1990]: time="2025-05-27T03:24:14.362275421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787bbfb675-hlbrh,Uid:e8dfedbc-afe6-4b30-b214-b9a2e156015d,Namespace:calico-system,Attempt:0,}" May 27 03:24:14.584542 kubelet[3234]: I0527 03:24:14.583590 3234 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c32713b-5a33-4442-a018-326711c1be7b" path="/var/lib/kubelet/pods/8c32713b-5a33-4442-a018-326711c1be7b/volumes" May 27 03:24:15.096350 containerd[1990]: time="2025-05-27T03:24:15.096160372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\" id:\"351aab8612480add8179ce5097279c6cf396d027a707fda37b5b7a9af900adf2\" pid:4774 exit_status:1 exited_at:{seconds:1748316255 nanos:95833279}" May 27 03:24:15.111476 (udev-worker)[4546]: Network interface NamePolicy= disabled on kernel command line. 
May 27 03:24:15.120042 systemd-networkd[1806]: cali861227efceb: Link UP May 27 03:24:15.120292 systemd-networkd[1806]: cali861227efceb: Gained carrier May 27 03:24:15.154257 containerd[1990]: 2025-05-27 03:24:14.485 [INFO][4711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:24:15.154257 containerd[1990]: 2025-05-27 03:24:14.559 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0 whisker-787bbfb675- calico-system e8dfedbc-afe6-4b30-b214-b9a2e156015d 879 0 2025-05-27 03:24:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:787bbfb675 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-64 whisker-787bbfb675-hlbrh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali861227efceb [] [] }} ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-" May 27 03:24:15.154257 containerd[1990]: 2025-05-27 03:24:14.560 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" May 27 03:24:15.154257 containerd[1990]: 2025-05-27 03:24:14.992 [INFO][4723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" HandleID="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Workload="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:14.995 
[INFO][4723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" HandleID="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Workload="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f770), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-64", "pod":"whisker-787bbfb675-hlbrh", "timestamp":"2025-05-27 03:24:14.992859454 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:14.995 [INFO][4723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:14.996 [INFO][4723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:14.996 [INFO][4723] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:15.032 [INFO][4723] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" host="ip-172-31-28-64" May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:15.053 [INFO][4723] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:15.061 [INFO][4723] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:15.065 [INFO][4723] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:15.154613 containerd[1990]: 2025-05-27 03:24:15.068 [INFO][4723] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:15.157831 containerd[1990]: 2025-05-27 03:24:15.068 [INFO][4723] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" host="ip-172-31-28-64" May 27 03:24:15.157831 containerd[1990]: 2025-05-27 03:24:15.070 [INFO][4723] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9 May 27 03:24:15.157831 containerd[1990]: 2025-05-27 03:24:15.077 [INFO][4723] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" host="ip-172-31-28-64" May 27 03:24:15.157831 containerd[1990]: 2025-05-27 03:24:15.085 [INFO][4723] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.129/26] block=192.168.19.128/26 
handle="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" host="ip-172-31-28-64" May 27 03:24:15.157831 containerd[1990]: 2025-05-27 03:24:15.086 [INFO][4723] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.129/26] handle="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" host="ip-172-31-28-64" May 27 03:24:15.157831 containerd[1990]: 2025-05-27 03:24:15.086 [INFO][4723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:15.157831 containerd[1990]: 2025-05-27 03:24:15.086 [INFO][4723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.129/26] IPv6=[] ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" HandleID="k8s-pod-network.ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Workload="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" May 27 03:24:15.158119 containerd[1990]: 2025-05-27 03:24:15.092 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0", GenerateName:"whisker-787bbfb675-", Namespace:"calico-system", SelfLink:"", UID:"e8dfedbc-afe6-4b30-b214-b9a2e156015d", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787bbfb675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"whisker-787bbfb675-hlbrh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali861227efceb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:15.158119 containerd[1990]: 2025-05-27 03:24:15.092 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.129/32] ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" May 27 03:24:15.158270 containerd[1990]: 2025-05-27 03:24:15.092 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali861227efceb ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" May 27 03:24:15.158270 containerd[1990]: 2025-05-27 03:24:15.117 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" May 27 03:24:15.158345 containerd[1990]: 2025-05-27 03:24:15.118 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" 
Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0", GenerateName:"whisker-787bbfb675-", Namespace:"calico-system", SelfLink:"", UID:"e8dfedbc-afe6-4b30-b214-b9a2e156015d", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787bbfb675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9", Pod:"whisker-787bbfb675-hlbrh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali861227efceb", MAC:"4a:e8:18:b2:0c:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:15.158439 containerd[1990]: 2025-05-27 03:24:15.147 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" Namespace="calico-system" Pod="whisker-787bbfb675-hlbrh" WorkloadEndpoint="ip--172--31--28--64-k8s-whisker--787bbfb675--hlbrh-eth0" May 27 03:24:15.448821 
systemd-networkd[1806]: vxlan.calico: Link UP May 27 03:24:15.448833 systemd-networkd[1806]: vxlan.calico: Gained carrier May 27 03:24:15.499791 containerd[1990]: time="2025-05-27T03:24:15.499740746Z" level=info msg="connecting to shim ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9" address="unix:///run/containerd/s/2bb2b0237c903c3b77ac140474cb965a41f56e7aea2a4cf8b4cbcc94869b7ab8" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:15.523624 (udev-worker)[4550]: Network interface NamePolicy= disabled on kernel command line. May 27 03:24:15.545822 systemd[1]: Started cri-containerd-ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9.scope - libcontainer container ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9. May 27 03:24:15.657510 containerd[1990]: time="2025-05-27T03:24:15.656734586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787bbfb675-hlbrh,Uid:e8dfedbc-afe6-4b30-b214-b9a2e156015d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce8d255c8dc7d92c788c11c4c802c72ad1b41f65a1d4e07381a26d89cc9e08e9\"" May 27 03:24:15.697304 containerd[1990]: time="2025-05-27T03:24:15.697240517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:15.897303 containerd[1990]: time="2025-05-27T03:24:15.897156315Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:15.899586 containerd[1990]: time="2025-05-27T03:24:15.899319405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from 
GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:15.899796 containerd[1990]: time="2025-05-27T03:24:15.899700696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:15.907394 kubelet[3234]: E0527 03:24:15.904019 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:15.907394 kubelet[3234]: E0527 03:24:15.906980 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:15.935790 kubelet[3234]: E0527 03:24:15.935707 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a52bb5dcaf854b368beda8a6c3b5e697,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:15.939111 containerd[1990]: 
time="2025-05-27T03:24:15.938984219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:16.109291 containerd[1990]: time="2025-05-27T03:24:16.109222777Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:16.111494 containerd[1990]: time="2025-05-27T03:24:16.111432243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:16.111639 containerd[1990]: time="2025-05-27T03:24:16.111439386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:16.111804 kubelet[3234]: E0527 03:24:16.111747 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:16.111804 kubelet[3234]: E0527 03:24:16.111793 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:16.111959 kubelet[3234]: E0527 03:24:16.111905 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:16.113342 kubelet[3234]: E0527 03:24:16.113287 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d" May 27 03:24:16.264749 systemd-networkd[1806]: cali861227efceb: Gained IPv6LL 
May 27 03:24:16.564905 containerd[1990]: time="2025-05-27T03:24:16.564465802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-phh6k,Uid:466fd2f4-81f8-4a10-914e-b412ae83f7aa,Namespace:kube-system,Attempt:0,}" May 27 03:24:16.686140 systemd-networkd[1806]: cali6d40706dd92: Link UP May 27 03:24:16.686757 systemd-networkd[1806]: cali6d40706dd92: Gained carrier May 27 03:24:16.707174 containerd[1990]: 2025-05-27 03:24:16.605 [INFO][4915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0 coredns-674b8bbfcf- kube-system 466fd2f4-81f8-4a10-914e-b412ae83f7aa 808 0 2025-05-27 03:23:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-64 coredns-674b8bbfcf-phh6k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6d40706dd92 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-" May 27 03:24:16.707174 containerd[1990]: 2025-05-27 03:24:16.605 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" May 27 03:24:16.707174 containerd[1990]: 2025-05-27 03:24:16.639 [INFO][4926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" HandleID="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" 
Workload="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.639 [INFO][4926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" HandleID="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Workload="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235020), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-64", "pod":"coredns-674b8bbfcf-phh6k", "timestamp":"2025-05-27 03:24:16.639430811 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.639 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.639 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.639 [INFO][4926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.649 [INFO][4926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" host="ip-172-31-28-64" May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.655 [INFO][4926] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.661 [INFO][4926] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.664 [INFO][4926] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.666 [INFO][4926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:16.707426 containerd[1990]: 2025-05-27 03:24:16.666 [INFO][4926] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" host="ip-172-31-28-64" May 27 03:24:16.708428 containerd[1990]: 2025-05-27 03:24:16.667 [INFO][4926] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376 May 27 03:24:16.708428 containerd[1990]: 2025-05-27 03:24:16.671 [INFO][4926] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" host="ip-172-31-28-64" May 27 03:24:16.708428 containerd[1990]: 2025-05-27 03:24:16.679 [INFO][4926] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.130/26] block=192.168.19.128/26 
handle="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" host="ip-172-31-28-64" May 27 03:24:16.708428 containerd[1990]: 2025-05-27 03:24:16.679 [INFO][4926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.130/26] handle="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" host="ip-172-31-28-64" May 27 03:24:16.708428 containerd[1990]: 2025-05-27 03:24:16.679 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:16.708428 containerd[1990]: 2025-05-27 03:24:16.679 [INFO][4926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.130/26] IPv6=[] ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" HandleID="k8s-pod-network.4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Workload="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" May 27 03:24:16.708693 containerd[1990]: 2025-05-27 03:24:16.682 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"466fd2f4-81f8-4a10-914e-b412ae83f7aa", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"coredns-674b8bbfcf-phh6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d40706dd92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:16.708693 containerd[1990]: 2025-05-27 03:24:16.683 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.130/32] ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" May 27 03:24:16.708693 containerd[1990]: 2025-05-27 03:24:16.683 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d40706dd92 ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" May 27 03:24:16.708693 containerd[1990]: 2025-05-27 03:24:16.687 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" May 27 03:24:16.708693 containerd[1990]: 2025-05-27 03:24:16.689 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"466fd2f4-81f8-4a10-914e-b412ae83f7aa", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376", Pod:"coredns-674b8bbfcf-phh6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d40706dd92", MAC:"16:a2:58:86:0a:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:16.708693 containerd[1990]: 2025-05-27 03:24:16.704 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" Namespace="kube-system" Pod="coredns-674b8bbfcf-phh6k" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--phh6k-eth0" May 27 03:24:16.755505 containerd[1990]: time="2025-05-27T03:24:16.755129977Z" level=info msg="connecting to shim 4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376" address="unix:///run/containerd/s/cd9254132889f37c905f17adfc46a538091733b15a66e6ee19c3fbd402c0800d" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:16.778768 systemd[1]: Started cri-containerd-4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376.scope - libcontainer container 4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376. 
May 27 03:24:16.841982 containerd[1990]: time="2025-05-27T03:24:16.841935133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-phh6k,Uid:466fd2f4-81f8-4a10-914e-b412ae83f7aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376\"" May 27 03:24:16.855165 containerd[1990]: time="2025-05-27T03:24:16.855133042Z" level=info msg="CreateContainer within sandbox \"4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:16.889227 containerd[1990]: time="2025-05-27T03:24:16.889197187Z" level=info msg="Container 4c239a7d8d6b9eaca2b8ed4f3ccf05954117fb010a387e4d7e4b206bb8b7dfa4: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:16.894240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468148267.mount: Deactivated successfully. May 27 03:24:16.900035 kubelet[3234]: E0527 03:24:16.899995 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d" May 27 03:24:16.903672 containerd[1990]: time="2025-05-27T03:24:16.903593099Z" level=info msg="CreateContainer within sandbox \"4b7f45bff4e756ae39b557d21e9b6b644aa53446f3ca49235fbf6addfe6fe376\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c239a7d8d6b9eaca2b8ed4f3ccf05954117fb010a387e4d7e4b206bb8b7dfa4\"" May 27 03:24:16.904775 containerd[1990]: time="2025-05-27T03:24:16.904743462Z" level=info msg="StartContainer for \"4c239a7d8d6b9eaca2b8ed4f3ccf05954117fb010a387e4d7e4b206bb8b7dfa4\"" May 27 03:24:16.907253 containerd[1990]: time="2025-05-27T03:24:16.907216822Z" level=info msg="connecting to shim 4c239a7d8d6b9eaca2b8ed4f3ccf05954117fb010a387e4d7e4b206bb8b7dfa4" address="unix:///run/containerd/s/cd9254132889f37c905f17adfc46a538091733b15a66e6ee19c3fbd402c0800d" protocol=ttrpc version=3 May 27 03:24:16.937706 systemd[1]: Started cri-containerd-4c239a7d8d6b9eaca2b8ed4f3ccf05954117fb010a387e4d7e4b206bb8b7dfa4.scope - libcontainer container 4c239a7d8d6b9eaca2b8ed4f3ccf05954117fb010a387e4d7e4b206bb8b7dfa4. 
May 27 03:24:16.981914 containerd[1990]: time="2025-05-27T03:24:16.981878338Z" level=info msg="StartContainer for \"4c239a7d8d6b9eaca2b8ed4f3ccf05954117fb010a387e4d7e4b206bb8b7dfa4\" returns successfully" May 27 03:24:17.096795 systemd-networkd[1806]: vxlan.calico: Gained IPv6LL May 27 03:24:17.561912 containerd[1990]: time="2025-05-27T03:24:17.561804292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-bfrlq,Uid:56ee3bee-606e-4fbc-9ef0-e5652e2baa12,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:17.562205 containerd[1990]: time="2025-05-27T03:24:17.561840565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744b9f4d8b-xp4nf,Uid:54b9b148-d786-4df2-a9e7-5d0369298029,Namespace:calico-system,Attempt:0,}" May 27 03:24:17.571575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134032436.mount: Deactivated successfully. May 27 03:24:17.737751 systemd-networkd[1806]: calied0070f564c: Link UP May 27 03:24:17.738355 systemd-networkd[1806]: calied0070f564c: Gained carrier May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.635 [INFO][5022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0 calico-apiserver-699c564668- calico-apiserver 56ee3bee-606e-4fbc-9ef0-e5652e2baa12 807 0 2025-05-27 03:23:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699c564668 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-64 calico-apiserver-699c564668-bfrlq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied0070f564c [] [] }} ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" 
Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.636 [INFO][5022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.691 [INFO][5048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" HandleID="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Workload="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.692 [INFO][5048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" HandleID="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Workload="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004bdd30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-64", "pod":"calico-apiserver-699c564668-bfrlq", "timestamp":"2025-05-27 03:24:17.691901091 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.692 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.692 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.692 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.702 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.707 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.712 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.714 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.716 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.716 [INFO][5048] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.718 [INFO][5048] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.724 [INFO][5048] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 
2025-05-27 03:24:17.729 [INFO][5048] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.131/26] block=192.168.19.128/26 handle="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.729 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.131/26] handle="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" host="ip-172-31-28-64" May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.730 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:17.762693 containerd[1990]: 2025-05-27 03:24:17.730 [INFO][5048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.131/26] IPv6=[] ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" HandleID="k8s-pod-network.296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Workload="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" May 27 03:24:17.764924 containerd[1990]: 2025-05-27 03:24:17.733 [INFO][5022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0", GenerateName:"calico-apiserver-699c564668-", Namespace:"calico-apiserver", SelfLink:"", UID:"56ee3bee-606e-4fbc-9ef0-e5652e2baa12", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c564668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"calico-apiserver-699c564668-bfrlq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0070f564c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:17.764924 containerd[1990]: 2025-05-27 03:24:17.733 [INFO][5022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.131/32] ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" May 27 03:24:17.764924 containerd[1990]: 2025-05-27 03:24:17.733 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied0070f564c ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" May 27 03:24:17.764924 containerd[1990]: 2025-05-27 03:24:17.738 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" 
Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" May 27 03:24:17.764924 containerd[1990]: 2025-05-27 03:24:17.739 [INFO][5022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0", GenerateName:"calico-apiserver-699c564668-", Namespace:"calico-apiserver", SelfLink:"", UID:"56ee3bee-606e-4fbc-9ef0-e5652e2baa12", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c564668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d", Pod:"calico-apiserver-699c564668-bfrlq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0070f564c", MAC:"02:fe:07:45:23:f4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:17.764924 containerd[1990]: 2025-05-27 03:24:17.753 [INFO][5022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-bfrlq" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--bfrlq-eth0" May 27 03:24:17.819073 containerd[1990]: time="2025-05-27T03:24:17.818314518Z" level=info msg="connecting to shim 296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d" address="unix:///run/containerd/s/72c14df223f3d8abc1e8e762688aaa09cd357558549481760bd02ddb70b26ae1" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:17.859750 systemd[1]: Started cri-containerd-296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d.scope - libcontainer container 296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d. 
May 27 03:24:17.894210 systemd-networkd[1806]: cali985ec00e8a0: Link UP May 27 03:24:17.897613 systemd-networkd[1806]: cali985ec00e8a0: Gained carrier May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.630 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0 calico-kube-controllers-744b9f4d8b- calico-system 54b9b148-d786-4df2-a9e7-5d0369298029 810 0 2025-05-27 03:23:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:744b9f4d8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-64 calico-kube-controllers-744b9f4d8b-xp4nf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali985ec00e8a0 [] [] }} ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.631 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.697 [INFO][5046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" HandleID="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" 
Workload="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.698 [INFO][5046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" HandleID="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Workload="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010fb70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-64", "pod":"calico-kube-controllers-744b9f4d8b-xp4nf", "timestamp":"2025-05-27 03:24:17.697221143 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.699 [INFO][5046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.729 [INFO][5046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.729 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.806 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.816 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.841 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.851 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.861 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.861 [INFO][5046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.864 [INFO][5046] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.871 [INFO][5046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.884 [INFO][5046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.132/26] block=192.168.19.128/26 
handle="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.885 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.132/26] handle="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" host="ip-172-31-28-64" May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.885 [INFO][5046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:17.924685 containerd[1990]: 2025-05-27 03:24:17.885 [INFO][5046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.132/26] IPv6=[] ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" HandleID="k8s-pod-network.83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Workload="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" May 27 03:24:17.925713 containerd[1990]: 2025-05-27 03:24:17.890 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0", GenerateName:"calico-kube-controllers-744b9f4d8b-", Namespace:"calico-system", SelfLink:"", UID:"54b9b148-d786-4df2-a9e7-5d0369298029", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744b9f4d8b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"calico-kube-controllers-744b9f4d8b-xp4nf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali985ec00e8a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:17.925713 containerd[1990]: 2025-05-27 03:24:17.890 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.132/32] ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" May 27 03:24:17.925713 containerd[1990]: 2025-05-27 03:24:17.891 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali985ec00e8a0 ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" May 27 03:24:17.925713 containerd[1990]: 2025-05-27 03:24:17.894 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" 
WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" May 27 03:24:17.925713 containerd[1990]: 2025-05-27 03:24:17.894 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0", GenerateName:"calico-kube-controllers-744b9f4d8b-", Namespace:"calico-system", SelfLink:"", UID:"54b9b148-d786-4df2-a9e7-5d0369298029", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"744b9f4d8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc", Pod:"calico-kube-controllers-744b9f4d8b-xp4nf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali985ec00e8a0", MAC:"8e:66:10:5c:ae:ba", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:17.925713 containerd[1990]: 2025-05-27 03:24:17.918 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" Namespace="calico-system" Pod="calico-kube-controllers-744b9f4d8b-xp4nf" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--kube--controllers--744b9f4d8b--xp4nf-eth0" May 27 03:24:17.939049 kubelet[3234]: I0527 03:24:17.938965 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-phh6k" podStartSLOduration=35.938940977 podStartE2EDuration="35.938940977s" podCreationTimestamp="2025-05-27 03:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:17.937241568 +0000 UTC m=+41.532895410" watchObservedRunningTime="2025-05-27 03:24:17.938940977 +0000 UTC m=+41.534594819" May 27 03:24:17.990455 containerd[1990]: time="2025-05-27T03:24:17.989857454Z" level=info msg="connecting to shim 83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc" address="unix:///run/containerd/s/14ee11f5ad8fa29b30d9f496b8126529996ebd4370cf78c81535105dc8e3792c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:18.048694 systemd[1]: Started cri-containerd-83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc.scope - libcontainer container 83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc. 
May 27 03:24:18.108468 containerd[1990]: time="2025-05-27T03:24:18.108419995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-bfrlq,Uid:56ee3bee-606e-4fbc-9ef0-e5652e2baa12,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d\"" May 27 03:24:18.110949 containerd[1990]: time="2025-05-27T03:24:18.110832846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:24:18.169591 containerd[1990]: time="2025-05-27T03:24:18.169550600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-744b9f4d8b-xp4nf,Uid:54b9b148-d786-4df2-a9e7-5d0369298029,Namespace:calico-system,Attempt:0,} returns sandbox id \"83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc\"" May 27 03:24:18.376829 systemd-networkd[1806]: cali6d40706dd92: Gained IPv6LL May 27 03:24:18.572606 containerd[1990]: time="2025-05-27T03:24:18.572556616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q2bwz,Uid:47be3583-df5e-4c3c-9ca6-24fbbdb1d253,Namespace:kube-system,Attempt:0,}" May 27 03:24:18.699138 systemd-networkd[1806]: cali399b175caea: Link UP May 27 03:24:18.700291 systemd-networkd[1806]: cali399b175caea: Gained carrier May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.621 [INFO][5170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0 coredns-674b8bbfcf- kube-system 47be3583-df5e-4c3c-9ca6-24fbbdb1d253 812 0 2025-05-27 03:23:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-64 coredns-674b8bbfcf-q2bwz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali399b175caea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] [] }} ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.622 [INFO][5170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.657 [INFO][5183] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" HandleID="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Workload="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.657 [INFO][5183] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" HandleID="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Workload="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002352a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-64", "pod":"coredns-674b8bbfcf-q2bwz", "timestamp":"2025-05-27 03:24:18.657190425 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.657 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.657 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.657 [INFO][5183] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.665 [INFO][5183] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.671 [INFO][5183] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.675 [INFO][5183] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.677 [INFO][5183] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.680 [INFO][5183] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.680 [INFO][5183] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.681 [INFO][5183] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3 May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.685 [INFO][5183] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 
2025-05-27 03:24:18.693 [INFO][5183] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.133/26] block=192.168.19.128/26 handle="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.693 [INFO][5183] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.133/26] handle="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" host="ip-172-31-28-64" May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.693 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:18.716115 containerd[1990]: 2025-05-27 03:24:18.694 [INFO][5183] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.133/26] IPv6=[] ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" HandleID="k8s-pod-network.a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Workload="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" May 27 03:24:18.716788 containerd[1990]: 2025-05-27 03:24:18.696 [INFO][5170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"47be3583-df5e-4c3c-9ca6-24fbbdb1d253", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"coredns-674b8bbfcf-q2bwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali399b175caea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:18.716788 containerd[1990]: 2025-05-27 03:24:18.696 [INFO][5170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.133/32] ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" May 27 03:24:18.716788 containerd[1990]: 2025-05-27 03:24:18.696 [INFO][5170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali399b175caea ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" May 27 03:24:18.716788 containerd[1990]: 2025-05-27 03:24:18.700 [INFO][5170] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" May 27 03:24:18.716788 containerd[1990]: 2025-05-27 03:24:18.701 [INFO][5170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"47be3583-df5e-4c3c-9ca6-24fbbdb1d253", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3", Pod:"coredns-674b8bbfcf-q2bwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali399b175caea", MAC:"a2:a4:8d:56:31:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:18.716788 containerd[1990]: 2025-05-27 03:24:18.711 [INFO][5170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" Namespace="kube-system" Pod="coredns-674b8bbfcf-q2bwz" WorkloadEndpoint="ip--172--31--28--64-k8s-coredns--674b8bbfcf--q2bwz-eth0" May 27 03:24:18.753940 containerd[1990]: time="2025-05-27T03:24:18.753863673Z" level=info msg="connecting to shim a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3" address="unix:///run/containerd/s/779888a85aa7e1c78f7bb1a2471933e9c108e8f268e9e940670ea6ae61027731" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:18.785717 systemd[1]: Started cri-containerd-a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3.scope - libcontainer container a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3. 
May 27 03:24:18.840443 containerd[1990]: time="2025-05-27T03:24:18.840320887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q2bwz,Uid:47be3583-df5e-4c3c-9ca6-24fbbdb1d253,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3\"" May 27 03:24:18.848160 containerd[1990]: time="2025-05-27T03:24:18.848120468Z" level=info msg="CreateContainer within sandbox \"a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:18.863889 containerd[1990]: time="2025-05-27T03:24:18.863661464Z" level=info msg="Container 50993518b66f3df2b4c2d962e06fbc0bbef2313c77da8357f231de90fa5112fe: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:18.875994 containerd[1990]: time="2025-05-27T03:24:18.875948184Z" level=info msg="CreateContainer within sandbox \"a7e9674530f96e41b9e5a6e2f1aa2d52bc69341c395abca13b464ce61ef4a3f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50993518b66f3df2b4c2d962e06fbc0bbef2313c77da8357f231de90fa5112fe\"" May 27 03:24:18.877537 containerd[1990]: time="2025-05-27T03:24:18.876585656Z" level=info msg="StartContainer for \"50993518b66f3df2b4c2d962e06fbc0bbef2313c77da8357f231de90fa5112fe\"" May 27 03:24:18.877987 containerd[1990]: time="2025-05-27T03:24:18.877957988Z" level=info msg="connecting to shim 50993518b66f3df2b4c2d962e06fbc0bbef2313c77da8357f231de90fa5112fe" address="unix:///run/containerd/s/779888a85aa7e1c78f7bb1a2471933e9c108e8f268e9e940670ea6ae61027731" protocol=ttrpc version=3 May 27 03:24:18.898733 systemd[1]: Started cri-containerd-50993518b66f3df2b4c2d962e06fbc0bbef2313c77da8357f231de90fa5112fe.scope - libcontainer container 50993518b66f3df2b4c2d962e06fbc0bbef2313c77da8357f231de90fa5112fe. 
May 27 03:24:18.939755 containerd[1990]: time="2025-05-27T03:24:18.939721778Z" level=info msg="StartContainer for \"50993518b66f3df2b4c2d962e06fbc0bbef2313c77da8357f231de90fa5112fe\" returns successfully" May 27 03:24:19.400872 systemd-networkd[1806]: calied0070f564c: Gained IPv6LL May 27 03:24:19.912751 systemd-networkd[1806]: cali985ec00e8a0: Gained IPv6LL May 27 03:24:19.985208 kubelet[3234]: I0527 03:24:19.984547 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q2bwz" podStartSLOduration=37.984524827 podStartE2EDuration="37.984524827s" podCreationTimestamp="2025-05-27 03:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:19.955025614 +0000 UTC m=+43.550679467" watchObservedRunningTime="2025-05-27 03:24:19.984524827 +0000 UTC m=+43.580178669" May 27 03:24:20.488714 systemd-networkd[1806]: cali399b175caea: Gained IPv6LL May 27 03:24:20.563160 containerd[1990]: time="2025-05-27T03:24:20.563119209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-q4mpt,Uid:d81b5d20-78b9-4ebb-9eb4-dc51405b66e7,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:20.565025 containerd[1990]: time="2025-05-27T03:24:20.564987387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ghcj4,Uid:2b70e4e3-90ce-46ab-b026-f5df1c2548db,Namespace:calico-system,Attempt:0,}" May 27 03:24:20.567365 containerd[1990]: time="2025-05-27T03:24:20.567314975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-27b29,Uid:5b0dc86a-7a64-45f7-952b-6e3978d12edf,Namespace:calico-system,Attempt:0,}" May 27 03:24:20.947147 systemd-networkd[1806]: cali6e1468ff86b: Link UP May 27 03:24:20.951242 systemd-networkd[1806]: cali6e1468ff86b: Gained carrier May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.718 [INFO][5288] cni-plugin/plugin.go 340: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0 csi-node-driver- calico-system 5b0dc86a-7a64-45f7-952b-6e3978d12edf 701 0 2025-05-27 03:23:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-64 csi-node-driver-27b29 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6e1468ff86b [] [] }} ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.719 [INFO][5288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.842 [INFO][5325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" HandleID="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Workload="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.843 [INFO][5325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" HandleID="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Workload="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc000327540), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-64", "pod":"csi-node-driver-27b29", "timestamp":"2025-05-27 03:24:20.842505609 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.843 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.843 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.843 [INFO][5325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.865 [INFO][5325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.874 [INFO][5325] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.885 [INFO][5325] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.889 [INFO][5325] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.893 [INFO][5325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.893 [INFO][5325] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 
handle="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.896 [INFO][5325] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885 May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.902 [INFO][5325] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.920 [INFO][5325] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.134/26] block=192.168.19.128/26 handle="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.921 [INFO][5325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.134/26] handle="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" host="ip-172-31-28-64" May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.921 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:21.010632 containerd[1990]: 2025-05-27 03:24:20.921 [INFO][5325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.134/26] IPv6=[] ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" HandleID="k8s-pod-network.3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Workload="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" May 27 03:24:21.012230 containerd[1990]: 2025-05-27 03:24:20.938 [INFO][5288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b0dc86a-7a64-45f7-952b-6e3978d12edf", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"csi-node-driver-27b29", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e1468ff86b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:21.012230 containerd[1990]: 2025-05-27 03:24:20.939 [INFO][5288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.134/32] ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" May 27 03:24:21.012230 containerd[1990]: 2025-05-27 03:24:20.939 [INFO][5288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e1468ff86b ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" May 27 03:24:21.012230 containerd[1990]: 2025-05-27 03:24:20.956 [INFO][5288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" May 27 03:24:21.012230 containerd[1990]: 2025-05-27 03:24:20.957 [INFO][5288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b0dc86a-7a64-45f7-952b-6e3978d12edf", 
ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885", Pod:"csi-node-driver-27b29", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e1468ff86b", MAC:"62:db:e1:01:06:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:21.012230 containerd[1990]: 2025-05-27 03:24:20.998 [INFO][5288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" Namespace="calico-system" Pod="csi-node-driver-27b29" WorkloadEndpoint="ip--172--31--28--64-k8s-csi--node--driver--27b29-eth0" May 27 03:24:21.101319 systemd-networkd[1806]: cali01e756a1ef0: Link UP May 27 03:24:21.102364 systemd-networkd[1806]: cali01e756a1ef0: Gained carrier May 27 03:24:21.149274 containerd[1990]: time="2025-05-27T03:24:21.149223555Z" level=info msg="connecting to shim 3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885" 
address="unix:///run/containerd/s/104fe3602319298c13b2b2da3ea41106043cb85dc6756c81eb7f9559ebc78f8f" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.748 [INFO][5309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0 goldmane-78d55f7ddc- calico-system 2b70e4e3-90ce-46ab-b026-f5df1c2548db 809 0 2025-05-27 03:23:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-64 goldmane-78d55f7ddc-ghcj4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali01e756a1ef0 [] [] }} ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.748 [INFO][5309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.854 [INFO][5335] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" HandleID="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Workload="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.855 [INFO][5335] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" 
HandleID="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Workload="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d99a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-64", "pod":"goldmane-78d55f7ddc-ghcj4", "timestamp":"2025-05-27 03:24:20.854804719 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.855 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.921 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.921 [INFO][5335] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:20.975 [INFO][5335] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.002 [INFO][5335] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.019 [INFO][5335] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.024 [INFO][5335] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.032 [INFO][5335] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 
03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.033 [INFO][5335] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.038 [INFO][5335] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.054 [INFO][5335] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.074 [INFO][5335] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.135/26] block=192.168.19.128/26 handle="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.074 [INFO][5335] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.135/26] handle="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" host="ip-172-31-28-64" May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.074 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:21.152417 containerd[1990]: 2025-05-27 03:24:21.074 [INFO][5335] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.135/26] IPv6=[] ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" HandleID="k8s-pod-network.36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Workload="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" May 27 03:24:21.154604 containerd[1990]: 2025-05-27 03:24:21.092 [INFO][5309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"2b70e4e3-90ce-46ab-b026-f5df1c2548db", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"goldmane-78d55f7ddc-ghcj4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali01e756a1ef0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:21.154604 containerd[1990]: 2025-05-27 03:24:21.092 [INFO][5309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.135/32] ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" May 27 03:24:21.154604 containerd[1990]: 2025-05-27 03:24:21.092 [INFO][5309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01e756a1ef0 ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" May 27 03:24:21.154604 containerd[1990]: 2025-05-27 03:24:21.105 [INFO][5309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" May 27 03:24:21.154604 containerd[1990]: 2025-05-27 03:24:21.108 [INFO][5309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"2b70e4e3-90ce-46ab-b026-f5df1c2548db", ResourceVersion:"809", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea", Pod:"goldmane-78d55f7ddc-ghcj4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali01e756a1ef0", MAC:"ca:07:ad:4c:9a:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:21.154604 containerd[1990]: 2025-05-27 03:24:21.133 [INFO][5309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-ghcj4" WorkloadEndpoint="ip--172--31--28--64-k8s-goldmane--78d55f7ddc--ghcj4-eth0" May 27 03:24:21.191102 systemd-networkd[1806]: calie8a5fef2290: Link UP May 27 03:24:21.193926 systemd-networkd[1806]: calie8a5fef2290: Gained carrier May 27 03:24:21.264967 systemd[1]: Started cri-containerd-3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885.scope - libcontainer container 3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885. 
May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:20.744 [INFO][5290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0 calico-apiserver-699c564668- calico-apiserver d81b5d20-78b9-4ebb-9eb4-dc51405b66e7 811 0 2025-05-27 03:23:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699c564668 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-64 calico-apiserver-699c564668-q4mpt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie8a5fef2290 [] [] }} ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:20.745 [INFO][5290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:20.871 [INFO][5329] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" HandleID="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Workload="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:20.873 [INFO][5329] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" 
HandleID="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Workload="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-64", "pod":"calico-apiserver-699c564668-q4mpt", "timestamp":"2025-05-27 03:24:20.871670401 +0000 UTC"}, Hostname:"ip-172-31-28-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:20.874 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.076 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.076 [INFO][5329] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-64' May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.095 [INFO][5329] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.107 [INFO][5329] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.127 [INFO][5329] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.133 [INFO][5329] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.142 [INFO][5329] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 
host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.142 [INFO][5329] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.146 [INFO][5329] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.157 [INFO][5329] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.175 [INFO][5329] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.19.136/26] block=192.168.19.128/26 handle="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.175 [INFO][5329] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.136/26] handle="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" host="ip-172-31-28-64" May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.176 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:21.269401 containerd[1990]: 2025-05-27 03:24:21.176 [INFO][5329] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.136/26] IPv6=[] ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" HandleID="k8s-pod-network.eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Workload="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" May 27 03:24:21.271681 containerd[1990]: 2025-05-27 03:24:21.184 [INFO][5290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0", GenerateName:"calico-apiserver-699c564668-", Namespace:"calico-apiserver", SelfLink:"", UID:"d81b5d20-78b9-4ebb-9eb4-dc51405b66e7", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c564668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"", Pod:"calico-apiserver-699c564668-q4mpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8a5fef2290", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:21.271681 containerd[1990]: 2025-05-27 03:24:21.185 [INFO][5290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.136/32] ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" May 27 03:24:21.271681 containerd[1990]: 2025-05-27 03:24:21.185 [INFO][5290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8a5fef2290 ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" May 27 03:24:21.271681 containerd[1990]: 2025-05-27 03:24:21.218 [INFO][5290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" May 27 03:24:21.271681 containerd[1990]: 2025-05-27 03:24:21.220 [INFO][5290] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0", GenerateName:"calico-apiserver-699c564668-", Namespace:"calico-apiserver", SelfLink:"", UID:"d81b5d20-78b9-4ebb-9eb4-dc51405b66e7", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c564668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-64", ContainerID:"eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d", Pod:"calico-apiserver-699c564668-q4mpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8a5fef2290", MAC:"4e:5b:93:5d:4f:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:21.271681 containerd[1990]: 2025-05-27 03:24:21.261 [INFO][5290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" Namespace="calico-apiserver" Pod="calico-apiserver-699c564668-q4mpt" WorkloadEndpoint="ip--172--31--28--64-k8s-calico--apiserver--699c564668--q4mpt-eth0" May 27 03:24:21.341862 containerd[1990]: time="2025-05-27T03:24:21.341821914Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-27b29,Uid:5b0dc86a-7a64-45f7-952b-6e3978d12edf,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885\"" May 27 03:24:21.372754 containerd[1990]: time="2025-05-27T03:24:21.372708248Z" level=info msg="connecting to shim 36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea" address="unix:///run/containerd/s/c46ef1a4848722675e28b9c81c1a86700d9824ac6503b7cb06a9724ba603f825" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:21.400972 containerd[1990]: time="2025-05-27T03:24:21.400919942Z" level=info msg="connecting to shim eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d" address="unix:///run/containerd/s/2db37a51332579a4efcf575d2a6a5d7d539f8a894974b1f56dbb9a10dea79291" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:21.438699 systemd[1]: Started cri-containerd-36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea.scope - libcontainer container 36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea. May 27 03:24:21.456782 systemd[1]: Started cri-containerd-eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d.scope - libcontainer container eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d. 
May 27 03:24:21.588106 containerd[1990]: time="2025-05-27T03:24:21.586834118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c564668-q4mpt,Uid:d81b5d20-78b9-4ebb-9eb4-dc51405b66e7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d\"" May 27 03:24:21.597667 containerd[1990]: time="2025-05-27T03:24:21.597625836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-ghcj4,Uid:2b70e4e3-90ce-46ab-b026-f5df1c2548db,Namespace:calico-system,Attempt:0,} returns sandbox id \"36976ab4e7ef8e3595a38d73f4f43a876300bb7b55e95509d622b8d1eb227fea\"" May 27 03:24:21.775768 containerd[1990]: time="2025-05-27T03:24:21.775722826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:21.777807 containerd[1990]: time="2025-05-27T03:24:21.777720752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:24:21.779833 containerd[1990]: time="2025-05-27T03:24:21.779760314Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:21.782930 containerd[1990]: time="2025-05-27T03:24:21.782868642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:21.783721 containerd[1990]: time="2025-05-27T03:24:21.783649795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.672752653s" May 27 03:24:21.783721 containerd[1990]: time="2025-05-27T03:24:21.783690872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:24:21.785997 containerd[1990]: time="2025-05-27T03:24:21.785741798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:24:21.798056 containerd[1990]: time="2025-05-27T03:24:21.797445533Z" level=info msg="CreateContainer within sandbox \"296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:24:21.822323 containerd[1990]: time="2025-05-27T03:24:21.821629461Z" level=info msg="Container 67d64096a757c7ca06aa4c3b9041910f493ef4a03a5edaae9487fe4d746e87a2: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:21.834533 containerd[1990]: time="2025-05-27T03:24:21.834490886Z" level=info msg="CreateContainer within sandbox \"296dcdb1198d2b1b063dcff32051b649e6596b24b2f54c0d7c9c0d2aefe2490d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"67d64096a757c7ca06aa4c3b9041910f493ef4a03a5edaae9487fe4d746e87a2\"" May 27 03:24:21.835611 containerd[1990]: time="2025-05-27T03:24:21.835556565Z" level=info msg="StartContainer for \"67d64096a757c7ca06aa4c3b9041910f493ef4a03a5edaae9487fe4d746e87a2\"" May 27 03:24:21.836858 containerd[1990]: time="2025-05-27T03:24:21.836835187Z" level=info msg="connecting to shim 67d64096a757c7ca06aa4c3b9041910f493ef4a03a5edaae9487fe4d746e87a2" address="unix:///run/containerd/s/72c14df223f3d8abc1e8e762688aaa09cd357558549481760bd02ddb70b26ae1" protocol=ttrpc version=3 May 27 03:24:21.856661 systemd[1]: Started cri-containerd-67d64096a757c7ca06aa4c3b9041910f493ef4a03a5edaae9487fe4d746e87a2.scope - 
libcontainer container 67d64096a757c7ca06aa4c3b9041910f493ef4a03a5edaae9487fe4d746e87a2. May 27 03:24:21.920854 containerd[1990]: time="2025-05-27T03:24:21.920737128Z" level=info msg="StartContainer for \"67d64096a757c7ca06aa4c3b9041910f493ef4a03a5edaae9487fe4d746e87a2\" returns successfully" May 27 03:24:21.969979 kubelet[3234]: I0527 03:24:21.969903 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699c564668-bfrlq" podStartSLOduration=26.295204376 podStartE2EDuration="29.969883201s" podCreationTimestamp="2025-05-27 03:23:52 +0000 UTC" firstStartedPulling="2025-05-27 03:24:18.110387865 +0000 UTC m=+41.706041707" lastFinishedPulling="2025-05-27 03:24:21.785066713 +0000 UTC m=+45.380720532" observedRunningTime="2025-05-27 03:24:21.969058944 +0000 UTC m=+45.564712813" watchObservedRunningTime="2025-05-27 03:24:21.969883201 +0000 UTC m=+45.565537043" May 27 03:24:22.408714 systemd-networkd[1806]: cali6e1468ff86b: Gained IPv6LL May 27 03:24:23.113131 systemd-networkd[1806]: cali01e756a1ef0: Gained IPv6LL May 27 03:24:23.240671 systemd-networkd[1806]: calie8a5fef2290: Gained IPv6LL May 27 03:24:24.353164 systemd[1]: Started sshd@7-172.31.28.64:22-139.178.68.195:41580.service - OpenSSH per-connection server daemon (139.178.68.195:41580). May 27 03:24:24.699076 sshd[5566]: Accepted publickey for core from 139.178.68.195 port 41580 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:24.704326 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:24.715364 systemd-logind[1977]: New session 8 of user core. May 27 03:24:24.721732 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 03:24:25.845453 sshd[5574]: Connection closed by 139.178.68.195 port 41580 May 27 03:24:25.847241 sshd-session[5566]: pam_unix(sshd:session): session closed for user core May 27 03:24:25.869182 systemd[1]: sshd@7-172.31.28.64:22-139.178.68.195:41580.service: Deactivated successfully. May 27 03:24:25.872333 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:24:25.875399 systemd-logind[1977]: Session 8 logged out. Waiting for processes to exit. May 27 03:24:25.879394 systemd-logind[1977]: Removed session 8. May 27 03:24:25.957832 ntpd[1971]: Listen normally on 8 vxlan.calico 192.168.19.128:123 May 27 03:24:25.957901 ntpd[1971]: Listen normally on 9 cali861227efceb [fe80::ecee:eeff:feee:eeee%4]:123 May 27 03:24:25.957941 ntpd[1971]: Listen normally on 10 vxlan.calico [fe80::6416:d4ff:fed0:2df3%5]:123 May 27 03:24:25.957968 ntpd[1971]: Listen normally on 11 cali6d40706dd92 [fe80::ecee:eeff:feee:eeee%8]:123 May 27 03:24:25.957996 ntpd[1971]: Listen normally on 12 calied0070f564c [fe80::ecee:eeff:feee:eeee%9]:123 May 27 03:24:25.958027 ntpd[1971]: Listen normally on 13 cali985ec00e8a0 [fe80::ecee:eeff:feee:eeee%10]:123 May 27 03:24:25.958057 ntpd[1971]: Listen normally on 14 cali399b175caea [fe80::ecee:eeff:feee:eeee%11]:123 May 27 03:24:25.958082 ntpd[1971]: Listen normally on 15 cali6e1468ff86b [fe80::ecee:eeff:feee:eeee%12]:123 May 27 03:24:25.958106 ntpd[1971]: Listen normally on 16 cali01e756a1ef0 [fe80::ecee:eeff:feee:eeee%13]:123 May 27 03:24:25.958133 ntpd[1971]: Listen normally on 17 calie8a5fef2290 [fe80::ecee:eeff:feee:eeee%14]:123 May 27 03:24:26.510061 containerd[1990]: time="2025-05-27T03:24:26.510004394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:26.593603 containerd[1990]: time="2025-05-27T03:24:26.592465364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:24:26.607293 containerd[1990]: time="2025-05-27T03:24:26.607252145Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:26.610649 containerd[1990]: time="2025-05-27T03:24:26.610499106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:26.612058 containerd[1990]: time="2025-05-27T03:24:26.612017142Z" level=info
msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.825348659s" May 27 03:24:26.612058 containerd[1990]: time="2025-05-27T03:24:26.612057672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:24:26.614412 containerd[1990]: time="2025-05-27T03:24:26.614385587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:24:26.680526 containerd[1990]: time="2025-05-27T03:24:26.680403002Z" level=info msg="CreateContainer within sandbox \"83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:24:26.771527 containerd[1990]: time="2025-05-27T03:24:26.770703440Z" level=info msg="Container ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:26.786344 containerd[1990]: time="2025-05-27T03:24:26.786299966Z" level=info msg="CreateContainer within sandbox \"83f733f5eb6ebb8df7eea06beb3599cb96b3ede078cd2589652d5407c69163fc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\"" May 27 03:24:26.787313 containerd[1990]: time="2025-05-27T03:24:26.787252826Z" level=info msg="StartContainer for \"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\"" May 27 03:24:26.788674 containerd[1990]: time="2025-05-27T03:24:26.788268410Z" level=info msg="connecting to shim ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9" 
address="unix:///run/containerd/s/14ee11f5ad8fa29b30d9f496b8126529996ebd4370cf78c81535105dc8e3792c" protocol=ttrpc version=3 May 27 03:24:26.868744 systemd[1]: Started cri-containerd-ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9.scope - libcontainer container ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9. May 27 03:24:26.988387 containerd[1990]: time="2025-05-27T03:24:26.987364676Z" level=info msg="StartContainer for \"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\" returns successfully" May 27 03:24:28.024766 containerd[1990]: time="2025-05-27T03:24:28.024715557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:28.026888 containerd[1990]: time="2025-05-27T03:24:28.026811755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:24:28.029083 containerd[1990]: time="2025-05-27T03:24:28.028744098Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:28.032153 containerd[1990]: time="2025-05-27T03:24:28.032125174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:28.034080 containerd[1990]: time="2025-05-27T03:24:28.034055420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.419635821s" May 27 03:24:28.034214 containerd[1990]: 
time="2025-05-27T03:24:28.034201149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:24:28.056668 containerd[1990]: time="2025-05-27T03:24:28.056562521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:24:28.066633 containerd[1990]: time="2025-05-27T03:24:28.066196048Z" level=info msg="CreateContainer within sandbox \"3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:24:28.143812 containerd[1990]: time="2025-05-27T03:24:28.143755614Z" level=info msg="Container 3a24dcf959e38fa4647343d3837644c5a2571b466edcf6e8d84830ff0d1941e8: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:28.186880 containerd[1990]: time="2025-05-27T03:24:28.186843679Z" level=info msg="CreateContainer within sandbox \"3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3a24dcf959e38fa4647343d3837644c5a2571b466edcf6e8d84830ff0d1941e8\"" May 27 03:24:28.187849 containerd[1990]: time="2025-05-27T03:24:28.187825546Z" level=info msg="StartContainer for \"3a24dcf959e38fa4647343d3837644c5a2571b466edcf6e8d84830ff0d1941e8\"" May 27 03:24:28.189196 containerd[1990]: time="2025-05-27T03:24:28.189164539Z" level=info msg="connecting to shim 3a24dcf959e38fa4647343d3837644c5a2571b466edcf6e8d84830ff0d1941e8" address="unix:///run/containerd/s/104fe3602319298c13b2b2da3ea41106043cb85dc6756c81eb7f9559ebc78f8f" protocol=ttrpc version=3 May 27 03:24:28.222173 systemd[1]: Started cri-containerd-3a24dcf959e38fa4647343d3837644c5a2571b466edcf6e8d84830ff0d1941e8.scope - libcontainer container 3a24dcf959e38fa4647343d3837644c5a2571b466edcf6e8d84830ff0d1941e8. 
May 27 03:24:28.322751 containerd[1990]: time="2025-05-27T03:24:28.322474346Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\" id:\"00e990cb04d9904a6329a43a80fe4f501aa8a99b98d39e3ff8ca6fd79bdbaf90\" pid:5660 exited_at:{seconds:1748316268 nanos:290497703}" May 27 03:24:28.326471 containerd[1990]: time="2025-05-27T03:24:28.326424028Z" level=info msg="StartContainer for \"3a24dcf959e38fa4647343d3837644c5a2571b466edcf6e8d84830ff0d1941e8\" returns successfully" May 27 03:24:28.384731 kubelet[3234]: I0527 03:24:28.381889 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-744b9f4d8b-xp4nf" podStartSLOduration=23.938439743 podStartE2EDuration="32.380894126s" podCreationTimestamp="2025-05-27 03:23:56 +0000 UTC" firstStartedPulling="2025-05-27 03:24:18.171016594 +0000 UTC m=+41.766670421" lastFinishedPulling="2025-05-27 03:24:26.613470985 +0000 UTC m=+50.209124804" observedRunningTime="2025-05-27 03:24:28.077819653 +0000 UTC m=+51.673473495" watchObservedRunningTime="2025-05-27 03:24:28.380894126 +0000 UTC m=+51.976547964" May 27 03:24:28.465733 containerd[1990]: time="2025-05-27T03:24:28.465678648Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:28.468060 containerd[1990]: time="2025-05-27T03:24:28.467605869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 03:24:28.469384 containerd[1990]: time="2025-05-27T03:24:28.469349859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 412.75341ms" May 27 03:24:28.469384 containerd[1990]: time="2025-05-27T03:24:28.469382058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:24:28.470387 containerd[1990]: time="2025-05-27T03:24:28.470210393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:28.477765 containerd[1990]: time="2025-05-27T03:24:28.477731029Z" level=info msg="CreateContainer within sandbox \"eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:24:28.500513 containerd[1990]: time="2025-05-27T03:24:28.498632439Z" level=info msg="Container d896dc25b92b016e760e89d3996b134dfe7ef2ecae9fdd55028ceb63087e3d63: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:28.506923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939139213.mount: Deactivated successfully. 
May 27 03:24:28.514202 containerd[1990]: time="2025-05-27T03:24:28.514164193Z" level=info msg="CreateContainer within sandbox \"eacf7703bd3da803db05dc7f25d343424b0c8e1e1b662407eef5156f76f8097d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d896dc25b92b016e760e89d3996b134dfe7ef2ecae9fdd55028ceb63087e3d63\"" May 27 03:24:28.515560 containerd[1990]: time="2025-05-27T03:24:28.514803258Z" level=info msg="StartContainer for \"d896dc25b92b016e760e89d3996b134dfe7ef2ecae9fdd55028ceb63087e3d63\"" May 27 03:24:28.516068 containerd[1990]: time="2025-05-27T03:24:28.516041573Z" level=info msg="connecting to shim d896dc25b92b016e760e89d3996b134dfe7ef2ecae9fdd55028ceb63087e3d63" address="unix:///run/containerd/s/2db37a51332579a4efcf575d2a6a5d7d539f8a894974b1f56dbb9a10dea79291" protocol=ttrpc version=3 May 27 03:24:28.534968 systemd[1]: Started cri-containerd-d896dc25b92b016e760e89d3996b134dfe7ef2ecae9fdd55028ceb63087e3d63.scope - libcontainer container d896dc25b92b016e760e89d3996b134dfe7ef2ecae9fdd55028ceb63087e3d63. 
May 27 03:24:28.618764 containerd[1990]: time="2025-05-27T03:24:28.618728030Z" level=info msg="StartContainer for \"d896dc25b92b016e760e89d3996b134dfe7ef2ecae9fdd55028ceb63087e3d63\" returns successfully" May 27 03:24:28.657180 containerd[1990]: time="2025-05-27T03:24:28.657096303Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:28.659270 containerd[1990]: time="2025-05-27T03:24:28.659204436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:28.659442 containerd[1990]: time="2025-05-27T03:24:28.659242956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:28.679414 kubelet[3234]: E0527 03:24:28.679346 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:28.680598 kubelet[3234]: E0527 03:24:28.679424 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:28.680929 containerd[1990]: time="2025-05-27T03:24:28.680897285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:24:28.706143 kubelet[3234]: E0527 03:24:28.705507 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt7fr,ReadOnly:true,MountPath:/var/run/secrets/kube
rnetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-ghcj4_calico-system(2b70e4e3-90ce-46ab-b026-f5df1c2548db): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:28.710536 kubelet[3234]: E0527 03:24:28.709752 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db" May 27 03:24:29.102497 kubelet[3234]: E0527 03:24:29.102442 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db" May 27 03:24:29.156790 kubelet[3234]: I0527 03:24:29.156680 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699c564668-q4mpt" podStartSLOduration=30.318426705 podStartE2EDuration="37.156658311s" podCreationTimestamp="2025-05-27 03:23:52 +0000 UTC" firstStartedPulling="2025-05-27 03:24:21.631813448 +0000 UTC m=+45.227467280" lastFinishedPulling="2025-05-27 03:24:28.470045067 +0000 UTC m=+52.065698886" observedRunningTime="2025-05-27 03:24:29.154370199 +0000 UTC m=+52.750024041" watchObservedRunningTime="2025-05-27 03:24:29.156658311 +0000 UTC m=+52.752312155" May 27 03:24:30.133458 kubelet[3234]: I0527 03:24:30.133383 3234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:30.882024 systemd[1]: Started 
sshd@8-172.31.28.64:22-139.178.68.195:41588.service - OpenSSH per-connection server daemon (139.178.68.195:41588). May 27 03:24:31.118155 sshd[5738]: Accepted publickey for core from 139.178.68.195 port 41588 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:31.121874 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:31.128102 systemd-logind[1977]: New session 9 of user core. May 27 03:24:31.133688 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:24:32.090004 sshd[5740]: Connection closed by 139.178.68.195 port 41588 May 27 03:24:32.090683 sshd-session[5738]: pam_unix(sshd:session): session closed for user core May 27 03:24:32.095212 systemd-logind[1977]: Session 9 logged out. Waiting for processes to exit. May 27 03:24:32.095806 systemd[1]: sshd@8-172.31.28.64:22-139.178.68.195:41588.service: Deactivated successfully. May 27 03:24:32.098194 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:24:32.100261 systemd-logind[1977]: Removed session 9. 
May 27 03:24:32.556186 containerd[1990]: time="2025-05-27T03:24:32.555780424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:32.557680 containerd[1990]: time="2025-05-27T03:24:32.557638598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 03:24:32.563386 containerd[1990]: time="2025-05-27T03:24:32.562252674Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:32.570499 containerd[1990]: time="2025-05-27T03:24:32.570399222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:32.572870 containerd[1990]: time="2025-05-27T03:24:32.572829273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 3.891593136s" May 27 03:24:32.572980 containerd[1990]: time="2025-05-27T03:24:32.572896317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 03:24:32.578948 containerd[1990]: time="2025-05-27T03:24:32.578917787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:32.590465 containerd[1990]: time="2025-05-27T03:24:32.590425807Z" level=info 
msg="CreateContainer within sandbox \"3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 03:24:32.611444 containerd[1990]: time="2025-05-27T03:24:32.607183094Z" level=info msg="Container 806c4ba7f0d49eb1c0ad6dfea3e156d291dd0d00c015327469185f0e2d4608f7: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:32.626144 containerd[1990]: time="2025-05-27T03:24:32.626109564Z" level=info msg="CreateContainer within sandbox \"3d0c6308148fbbb70ba3b8e9f99b20cbdc67c70308bb38cdc0d91d4418fb8885\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"806c4ba7f0d49eb1c0ad6dfea3e156d291dd0d00c015327469185f0e2d4608f7\"" May 27 03:24:32.627657 containerd[1990]: time="2025-05-27T03:24:32.627551911Z" level=info msg="StartContainer for \"806c4ba7f0d49eb1c0ad6dfea3e156d291dd0d00c015327469185f0e2d4608f7\"" May 27 03:24:32.630048 containerd[1990]: time="2025-05-27T03:24:32.630017515Z" level=info msg="connecting to shim 806c4ba7f0d49eb1c0ad6dfea3e156d291dd0d00c015327469185f0e2d4608f7" address="unix:///run/containerd/s/104fe3602319298c13b2b2da3ea41106043cb85dc6756c81eb7f9559ebc78f8f" protocol=ttrpc version=3 May 27 03:24:32.654724 systemd[1]: Started cri-containerd-806c4ba7f0d49eb1c0ad6dfea3e156d291dd0d00c015327469185f0e2d4608f7.scope - libcontainer container 806c4ba7f0d49eb1c0ad6dfea3e156d291dd0d00c015327469185f0e2d4608f7. 
May 27 03:24:32.698408 containerd[1990]: time="2025-05-27T03:24:32.698374737Z" level=info msg="StartContainer for \"806c4ba7f0d49eb1c0ad6dfea3e156d291dd0d00c015327469185f0e2d4608f7\" returns successfully" May 27 03:24:32.761421 containerd[1990]: time="2025-05-27T03:24:32.761342471Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:32.763653 containerd[1990]: time="2025-05-27T03:24:32.763604282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:32.763761 containerd[1990]: time="2025-05-27T03:24:32.763715567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:32.767657 kubelet[3234]: E0527 03:24:32.767585 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:32.790986 kubelet[3234]: E0527 03:24:32.790920 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:32.791576 kubelet[3234]: E0527 03:24:32.791147 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a52bb5dcaf854b368beda8a6c3b5e697,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:32.794176 containerd[1990]: time="2025-05-27T03:24:32.794139097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:32.982391 containerd[1990]: time="2025-05-27T03:24:32.982343416Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:32.984535 containerd[1990]: time="2025-05-27T03:24:32.984467207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:32.984714 containerd[1990]: time="2025-05-27T03:24:32.984505204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:32.984775 kubelet[3234]: E0527 03:24:32.984734 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:32.984835 kubelet[3234]: E0527 03:24:32.984791 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:32.985319 kubelet[3234]: E0527 03:24:32.985270 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/sec
rets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:32.994290 kubelet[3234]: E0527 03:24:32.994229 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d" May 27 03:24:33.820218 kubelet[3234]: I0527 03:24:33.815780 3234 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 03:24:33.833579 kubelet[3234]: I0527 03:24:33.833530 3234 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 03:24:34.131894 kubelet[3234]: I0527 03:24:34.131854 3234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:34.173649 kubelet[3234]: I0527 03:24:34.169803 3234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-27b29" podStartSLOduration=26.940914778 podStartE2EDuration="38.167145466s" podCreationTimestamp="2025-05-27 03:23:56 +0000 UTC" firstStartedPulling="2025-05-27 03:24:21.35192463 +0000 UTC m=+44.947578464" lastFinishedPulling="2025-05-27 03:24:32.578155319 +0000 UTC m=+56.173809152" observedRunningTime="2025-05-27 03:24:33.237396848 +0000 UTC m=+56.833050690" watchObservedRunningTime="2025-05-27 03:24:34.167145466 +0000 UTC m=+57.762799298" May 27 03:24:37.129269 systemd[1]: Started sshd@9-172.31.28.64:22-139.178.68.195:57920.service - OpenSSH per-connection server daemon (139.178.68.195:57920). 
May 27 03:24:37.380704 sshd[5801]: Accepted publickey for core from 139.178.68.195 port 57920 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:37.384509 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:37.390210 systemd-logind[1977]: New session 10 of user core. May 27 03:24:37.394766 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:24:37.897049 sshd[5805]: Connection closed by 139.178.68.195 port 57920 May 27 03:24:37.897598 sshd-session[5801]: pam_unix(sshd:session): session closed for user core May 27 03:24:37.901284 systemd[1]: sshd@9-172.31.28.64:22-139.178.68.195:57920.service: Deactivated successfully. May 27 03:24:37.903398 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:24:37.904281 systemd-logind[1977]: Session 10 logged out. Waiting for processes to exit. May 27 03:24:37.906062 systemd-logind[1977]: Removed session 10. May 27 03:24:37.930178 systemd[1]: Started sshd@10-172.31.28.64:22-139.178.68.195:57924.service - OpenSSH per-connection server daemon (139.178.68.195:57924). May 27 03:24:38.120811 sshd[5818]: Accepted publickey for core from 139.178.68.195 port 57924 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:38.122263 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:38.127301 systemd-logind[1977]: New session 11 of user core. May 27 03:24:38.131689 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:24:38.379115 sshd[5820]: Connection closed by 139.178.68.195 port 57924 May 27 03:24:38.380698 sshd-session[5818]: pam_unix(sshd:session): session closed for user core May 27 03:24:38.391215 systemd[1]: sshd@10-172.31.28.64:22-139.178.68.195:57924.service: Deactivated successfully. May 27 03:24:38.391985 systemd-logind[1977]: Session 11 logged out. Waiting for processes to exit. 
May 27 03:24:38.397454 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:24:38.411920 systemd-logind[1977]: Removed session 11. May 27 03:24:38.414704 systemd[1]: Started sshd@11-172.31.28.64:22-139.178.68.195:57928.service - OpenSSH per-connection server daemon (139.178.68.195:57928). May 27 03:24:38.585410 sshd[5830]: Accepted publickey for core from 139.178.68.195 port 57928 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:38.586980 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:38.591537 systemd-logind[1977]: New session 12 of user core. May 27 03:24:38.596627 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:24:38.795899 sshd[5832]: Connection closed by 139.178.68.195 port 57928 May 27 03:24:38.796692 sshd-session[5830]: pam_unix(sshd:session): session closed for user core May 27 03:24:38.799798 systemd[1]: sshd@11-172.31.28.64:22-139.178.68.195:57928.service: Deactivated successfully. May 27 03:24:38.802678 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:24:38.804197 systemd-logind[1977]: Session 12 logged out. Waiting for processes to exit. May 27 03:24:38.805931 systemd-logind[1977]: Removed session 12. 
May 27 03:24:39.569042 containerd[1990]: time="2025-05-27T03:24:39.568982744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\" id:\"ad08840722800b6a3bc3f0c24ca7aa1585f1be14c2e8aa4d55edeac639907538\" pid:5858 exited_at:{seconds:1748316279 nanos:568628601}" May 27 03:24:42.566962 containerd[1990]: time="2025-05-27T03:24:42.566883251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:42.742327 containerd[1990]: time="2025-05-27T03:24:42.742278993Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:42.745345 containerd[1990]: time="2025-05-27T03:24:42.745185896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:42.745345 containerd[1990]: time="2025-05-27T03:24:42.745245320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:42.793365 kubelet[3234]: E0527 03:24:42.793303 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:42.793926 kubelet[3234]: E0527 03:24:42.793371 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:42.799614 kubelet[3234]: E0527 03:24:42.799528 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt7fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-ghcj4_calico-system(2b70e4e3-90ce-46ab-b026-f5df1c2548db): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:42.800953 kubelet[3234]: E0527 03:24:42.800722 3234 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db" May 27 03:24:43.830978 systemd[1]: Started sshd@12-172.31.28.64:22-139.178.68.195:49454.service - OpenSSH per-connection server daemon (139.178.68.195:49454). May 27 03:24:44.015578 sshd[5876]: Accepted publickey for core from 139.178.68.195 port 49454 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:44.016673 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:44.021744 systemd-logind[1977]: New session 13 of user core. May 27 03:24:44.029683 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:24:44.498053 sshd[5878]: Connection closed by 139.178.68.195 port 49454 May 27 03:24:44.499111 sshd-session[5876]: pam_unix(sshd:session): session closed for user core May 27 03:24:44.502224 systemd[1]: sshd@12-172.31.28.64:22-139.178.68.195:49454.service: Deactivated successfully. May 27 03:24:44.504413 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:24:44.505719 systemd-logind[1977]: Session 13 logged out. Waiting for processes to exit. May 27 03:24:44.508132 systemd-logind[1977]: Removed session 13. 
May 27 03:24:45.111382 containerd[1990]: time="2025-05-27T03:24:45.111339766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\" id:\"aa25d6c76c32e534ea3970688c846480857a9a1d03c26e37d2b737bc1296d0c8\" pid:5902 exited_at:{seconds:1748316285 nanos:111065879}" May 27 03:24:47.563598 kubelet[3234]: E0527 03:24:47.563475 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d" May 27 03:24:49.539715 systemd[1]: Started sshd@13-172.31.28.64:22-139.178.68.195:49468.service - OpenSSH per-connection server daemon (139.178.68.195:49468). 
May 27 03:24:49.771190 sshd[5915]: Accepted publickey for core from 139.178.68.195 port 49468 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:49.774744 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:49.780449 systemd-logind[1977]: New session 14 of user core. May 27 03:24:49.784669 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:24:50.155192 sshd[5919]: Connection closed by 139.178.68.195 port 49468 May 27 03:24:50.156267 sshd-session[5915]: pam_unix(sshd:session): session closed for user core May 27 03:24:50.161026 systemd-logind[1977]: Session 14 logged out. Waiting for processes to exit. May 27 03:24:50.161698 systemd[1]: sshd@13-172.31.28.64:22-139.178.68.195:49468.service: Deactivated successfully. May 27 03:24:50.163821 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:24:50.166147 systemd-logind[1977]: Removed session 14. May 27 03:24:55.191057 systemd[1]: Started sshd@14-172.31.28.64:22-139.178.68.195:35854.service - OpenSSH per-connection server daemon (139.178.68.195:35854). May 27 03:24:55.364587 sshd[5933]: Accepted publickey for core from 139.178.68.195 port 35854 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:24:55.366015 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:55.371628 systemd-logind[1977]: New session 15 of user core. May 27 03:24:55.375674 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:24:55.668783 sshd[5935]: Connection closed by 139.178.68.195 port 35854 May 27 03:24:55.676585 sshd-session[5933]: pam_unix(sshd:session): session closed for user core May 27 03:24:55.686182 systemd[1]: sshd@14-172.31.28.64:22-139.178.68.195:35854.service: Deactivated successfully. May 27 03:24:55.689854 systemd[1]: session-15.scope: Deactivated successfully. 
May 27 03:24:55.692105 systemd-logind[1977]: Session 15 logged out. Waiting for processes to exit. May 27 03:24:55.693687 systemd-logind[1977]: Removed session 15. May 27 03:24:57.563934 kubelet[3234]: E0527 03:24:57.563852 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db" May 27 03:24:58.060103 containerd[1990]: time="2025-05-27T03:24:58.060037110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\" id:\"3d4d38d753c379939e76bd792c794604a2af2d99e8b8cacd70718964a827a8f1\" pid:5964 exited_at:{seconds:1748316298 nanos:59761813}" May 27 03:24:58.587071 containerd[1990]: time="2025-05-27T03:24:58.587025422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:58.794400 containerd[1990]: time="2025-05-27T03:24:58.794358957Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:58.796607 containerd[1990]: time="2025-05-27T03:24:58.796541877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:58.797272 containerd[1990]: time="2025-05-27T03:24:58.796642231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:58.797552 kubelet[3234]: E0527 03:24:58.796788 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:58.797552 kubelet[3234]: E0527 03:24:58.796836 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:58.797552 kubelet[3234]: E0527 03:24:58.796975 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a52bb5dcaf854b368beda8a6c3b5e697,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:58.800066 containerd[1990]: 
time="2025-05-27T03:24:58.800036129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:59.055572 containerd[1990]: time="2025-05-27T03:24:59.055457875Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:59.057936 containerd[1990]: time="2025-05-27T03:24:59.057572687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:59.057936 containerd[1990]: time="2025-05-27T03:24:59.057674160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:59.058089 kubelet[3234]: E0527 03:24:59.058000 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:59.058089 kubelet[3234]: E0527 03:24:59.058042 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:59.058882 kubelet[3234]: E0527 03:24:59.058146 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:59.059674 kubelet[3234]: E0527 03:24:59.059634 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d" May 27 03:25:00.703981 systemd[1]: Started 
sshd@15-172.31.28.64:22-139.178.68.195:35856.service - OpenSSH per-connection server daemon (139.178.68.195:35856). May 27 03:25:00.968506 sshd[5975]: Accepted publickey for core from 139.178.68.195 port 35856 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:25:00.971576 sshd-session[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:00.977599 systemd-logind[1977]: New session 16 of user core. May 27 03:25:00.982712 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:25:02.184127 sshd[5977]: Connection closed by 139.178.68.195 port 35856 May 27 03:25:02.185190 sshd-session[5975]: pam_unix(sshd:session): session closed for user core May 27 03:25:02.189348 systemd[1]: sshd@15-172.31.28.64:22-139.178.68.195:35856.service: Deactivated successfully. May 27 03:25:02.192309 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:25:02.193352 systemd-logind[1977]: Session 16 logged out. Waiting for processes to exit. May 27 03:25:02.196108 systemd-logind[1977]: Removed session 16. May 27 03:25:02.220180 systemd[1]: Started sshd@16-172.31.28.64:22-139.178.68.195:35864.service - OpenSSH per-connection server daemon (139.178.68.195:35864). May 27 03:25:02.399609 sshd[5990]: Accepted publickey for core from 139.178.68.195 port 35864 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:25:02.401175 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:02.407330 systemd-logind[1977]: New session 17 of user core. May 27 03:25:02.412687 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:25:03.034285 sshd[5992]: Connection closed by 139.178.68.195 port 35864 May 27 03:25:03.035686 sshd-session[5990]: pam_unix(sshd:session): session closed for user core May 27 03:25:03.040282 systemd-logind[1977]: Session 17 logged out. Waiting for processes to exit. 
May 27 03:25:03.040591 systemd[1]: sshd@16-172.31.28.64:22-139.178.68.195:35864.service: Deactivated successfully. May 27 03:25:03.043757 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:25:03.046878 systemd-logind[1977]: Removed session 17. May 27 03:25:03.074883 systemd[1]: Started sshd@17-172.31.28.64:22-139.178.68.195:35878.service - OpenSSH per-connection server daemon (139.178.68.195:35878). May 27 03:25:03.273551 sshd[6002]: Accepted publickey for core from 139.178.68.195 port 35878 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:25:03.274656 sshd-session[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:03.279546 systemd-logind[1977]: New session 18 of user core. May 27 03:25:03.285700 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:25:04.264126 sshd[6004]: Connection closed by 139.178.68.195 port 35878 May 27 03:25:04.266134 sshd-session[6002]: pam_unix(sshd:session): session closed for user core May 27 03:25:04.272245 systemd-logind[1977]: Session 18 logged out. Waiting for processes to exit. May 27 03:25:04.273958 systemd[1]: sshd@17-172.31.28.64:22-139.178.68.195:35878.service: Deactivated successfully. May 27 03:25:04.276349 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:25:04.279223 systemd-logind[1977]: Removed session 18. May 27 03:25:04.303089 systemd[1]: Started sshd@18-172.31.28.64:22-139.178.68.195:58016.service - OpenSSH per-connection server daemon (139.178.68.195:58016). May 27 03:25:04.513564 sshd[6024]: Accepted publickey for core from 139.178.68.195 port 58016 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:25:04.515366 sshd-session[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:04.521561 systemd-logind[1977]: New session 19 of user core. May 27 03:25:04.525685 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 27 03:25:05.345610 sshd[6028]: Connection closed by 139.178.68.195 port 58016 May 27 03:25:05.346615 sshd-session[6024]: pam_unix(sshd:session): session closed for user core May 27 03:25:05.353412 systemd-logind[1977]: Session 19 logged out. Waiting for processes to exit. May 27 03:25:05.353859 systemd[1]: sshd@18-172.31.28.64:22-139.178.68.195:58016.service: Deactivated successfully. May 27 03:25:05.356618 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:25:05.359630 systemd-logind[1977]: Removed session 19. May 27 03:25:05.381569 systemd[1]: Started sshd@19-172.31.28.64:22-139.178.68.195:58024.service - OpenSSH per-connection server daemon (139.178.68.195:58024). May 27 03:25:05.560008 sshd[6038]: Accepted publickey for core from 139.178.68.195 port 58024 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI May 27 03:25:05.561543 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:05.566557 systemd-logind[1977]: New session 20 of user core. May 27 03:25:05.576724 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:25:05.789143 sshd[6040]: Connection closed by 139.178.68.195 port 58024 May 27 03:25:05.789771 sshd-session[6038]: pam_unix(sshd:session): session closed for user core May 27 03:25:05.795769 systemd[1]: sshd@19-172.31.28.64:22-139.178.68.195:58024.service: Deactivated successfully. May 27 03:25:05.799189 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:25:05.800458 systemd-logind[1977]: Session 20 logged out. Waiting for processes to exit. May 27 03:25:05.802340 systemd-logind[1977]: Removed session 20. May 27 03:25:10.824056 systemd[1]: Started sshd@20-172.31.28.64:22-139.178.68.195:58034.service - OpenSSH per-connection server daemon (139.178.68.195:58034). 
May 27 03:25:11.021647 sshd[6054]: Accepted publickey for core from 139.178.68.195 port 58034 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:25:11.024173 sshd-session[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:11.029694 systemd-logind[1977]: New session 21 of user core.
May 27 03:25:11.034699 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 03:25:11.322014 sshd[6057]: Connection closed by 139.178.68.195 port 58034
May 27 03:25:11.323347 sshd-session[6054]: pam_unix(sshd:session): session closed for user core
May 27 03:25:11.327954 systemd-logind[1977]: Session 21 logged out. Waiting for processes to exit.
May 27 03:25:11.328734 systemd[1]: sshd@20-172.31.28.64:22-139.178.68.195:58034.service: Deactivated successfully.
May 27 03:25:11.331107 systemd[1]: session-21.scope: Deactivated successfully.
May 27 03:25:11.333410 systemd-logind[1977]: Removed session 21.
May 27 03:25:12.630097 containerd[1990]: time="2025-05-27T03:25:12.629411588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:25:12.856985 containerd[1990]: time="2025-05-27T03:25:12.856915442Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:25:12.865099 containerd[1990]: time="2025-05-27T03:25:12.864947302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:25:12.865099 containerd[1990]: time="2025-05-27T03:25:12.864989462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:25:12.869813 kubelet[3234]: E0527 03:25:12.865246 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:25:12.869813 kubelet[3234]: E0527 03:25:12.869603 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:25:12.869813 kubelet[3234]: E0527 03:25:12.869760 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt7fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-ghcj4_calico-system(2b70e4e3-90ce-46ab-b026-f5df1c2548db): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:25:12.871518 kubelet[3234]: E0527 03:25:12.871459 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db"
May 27 03:25:14.566239 kubelet[3234]: E0527 03:25:14.566177 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d"
May 27 03:25:15.665563 containerd[1990]: time="2025-05-27T03:25:15.665464123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\" id:\"8264782b802788e1618d4cb9198ebd2b97a86d2662668afecce70b35efcc34ac\" pid:6083 exited_at:{seconds:1748316315 nanos:539318795}"
May 27 03:25:16.361908 systemd[1]: Started sshd@21-172.31.28.64:22-139.178.68.195:46180.service - OpenSSH per-connection server daemon (139.178.68.195:46180).
May 27 03:25:16.716162 sshd[6095]: Accepted publickey for core from 139.178.68.195 port 46180 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:25:16.723690 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:16.741081 systemd-logind[1977]: New session 22 of user core.
May 27 03:25:16.744075 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 03:25:17.825847 sshd[6097]: Connection closed by 139.178.68.195 port 46180
May 27 03:25:17.828682 sshd-session[6095]: pam_unix(sshd:session): session closed for user core
May 27 03:25:17.833467 systemd-logind[1977]: Session 22 logged out. Waiting for processes to exit.
May 27 03:25:17.833785 systemd[1]: sshd@21-172.31.28.64:22-139.178.68.195:46180.service: Deactivated successfully.
May 27 03:25:17.839256 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:25:17.845240 systemd-logind[1977]: Removed session 22.
May 27 03:25:22.866796 systemd[1]: Started sshd@22-172.31.28.64:22-139.178.68.195:46188.service - OpenSSH per-connection server daemon (139.178.68.195:46188).
May 27 03:25:23.049311 sshd[6109]: Accepted publickey for core from 139.178.68.195 port 46188 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:25:23.051397 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:23.058760 systemd-logind[1977]: New session 23 of user core.
May 27 03:25:23.063683 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 03:25:23.361387 sshd[6111]: Connection closed by 139.178.68.195 port 46188
May 27 03:25:23.361782 sshd-session[6109]: pam_unix(sshd:session): session closed for user core
May 27 03:25:23.365835 systemd[1]: sshd@22-172.31.28.64:22-139.178.68.195:46188.service: Deactivated successfully.
May 27 03:25:23.368122 systemd[1]: session-23.scope: Deactivated successfully.
May 27 03:25:23.372298 systemd-logind[1977]: Session 23 logged out. Waiting for processes to exit.
May 27 03:25:23.374069 systemd-logind[1977]: Removed session 23.
May 27 03:25:27.567021 kubelet[3234]: E0527 03:25:27.566963 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db"
May 27 03:25:27.588546 kubelet[3234]: E0527 03:25:27.588435 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d"
May 27 03:25:28.078452 containerd[1990]: time="2025-05-27T03:25:28.078319056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\" id:\"d355283c7f2f39277101872c7f841def30179712e115496e385f88120bd49134\" pid:6133 exited_at:{seconds:1748316328 nanos:77865819}"
May 27 03:25:28.395555 systemd[1]: Started sshd@23-172.31.28.64:22-139.178.68.195:47708.service - OpenSSH per-connection server daemon (139.178.68.195:47708).
May 27 03:25:28.584850 sshd[6145]: Accepted publickey for core from 139.178.68.195 port 47708 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:25:28.586766 sshd-session[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:28.592005 systemd-logind[1977]: New session 24 of user core.
May 27 03:25:28.595666 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:25:29.275613 sshd[6147]: Connection closed by 139.178.68.195 port 47708
May 27 03:25:29.276858 sshd-session[6145]: pam_unix(sshd:session): session closed for user core
May 27 03:25:29.289436 systemd[1]: sshd@23-172.31.28.64:22-139.178.68.195:47708.service: Deactivated successfully.
May 27 03:25:29.295598 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:25:29.298538 systemd-logind[1977]: Session 24 logged out. Waiting for processes to exit.
May 27 03:25:29.302397 systemd-logind[1977]: Removed session 24.
May 27 03:25:34.313152 systemd[1]: Started sshd@24-172.31.28.64:22-139.178.68.195:45606.service - OpenSSH per-connection server daemon (139.178.68.195:45606).
May 27 03:25:34.521714 sshd[6159]: Accepted publickey for core from 139.178.68.195 port 45606 ssh2: RSA SHA256:Uw58Bn7G+SJd5XoMf+3ukvYab1VfQ8PtnN9pHyXmnUI
May 27 03:25:34.523618 sshd-session[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:34.529930 systemd-logind[1977]: New session 25 of user core.
May 27 03:25:34.535713 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:25:34.825256 sshd[6161]: Connection closed by 139.178.68.195 port 45606
May 27 03:25:34.826733 sshd-session[6159]: pam_unix(sshd:session): session closed for user core
May 27 03:25:34.831216 systemd-logind[1977]: Session 25 logged out. Waiting for processes to exit.
May 27 03:25:34.832162 systemd[1]: sshd@24-172.31.28.64:22-139.178.68.195:45606.service: Deactivated successfully.
May 27 03:25:34.834452 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:25:34.837164 systemd-logind[1977]: Removed session 25.
May 27 03:25:39.552670 containerd[1990]: time="2025-05-27T03:25:39.552558208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\" id:\"bd8424a0e16d7de48dc33c8390977bd88a6bb47a0440a1de082340e900fa9033\" pid:6193 exited_at:{seconds:1748316339 nanos:552171367}"
May 27 03:25:41.563009 containerd[1990]: time="2025-05-27T03:25:41.562955927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:25:41.733314 containerd[1990]: time="2025-05-27T03:25:41.733252219Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:25:41.735643 containerd[1990]: time="2025-05-27T03:25:41.735531683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:25:41.735643 containerd[1990]: time="2025-05-27T03:25:41.735570922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:25:41.735803 kubelet[3234]: E0527 03:25:41.735765 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:25:41.736208 kubelet[3234]: E0527 03:25:41.735818 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:25:41.736208 kubelet[3234]: E0527 03:25:41.735931 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a52bb5dcaf854b368beda8a6c3b5e697,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:25:41.738098 containerd[1990]: time="2025-05-27T03:25:41.737891544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:25:41.911341 containerd[1990]: time="2025-05-27T03:25:41.911294184Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:25:41.913545 containerd[1990]: time="2025-05-27T03:25:41.913476172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:25:41.913692 containerd[1990]: time="2025-05-27T03:25:41.913503767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:25:41.913918 kubelet[3234]: E0527 03:25:41.913876 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:25:41.914026 kubelet[3234]: E0527 03:25:41.913924 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:25:41.914071 kubelet[3234]: E0527 03:25:41.914035 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw88x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bbfb675-hlbrh_calico-system(e8dfedbc-afe6-4b30-b214-b9a2e156015d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:25:41.915279 kubelet[3234]: E0527 03:25:41.915230 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d"
May 27 03:25:42.563189 kubelet[3234]: E0527 03:25:42.563126 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db"
May 27 03:25:45.213627 containerd[1990]: time="2025-05-27T03:25:45.213579208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dd29212239146f6b79a0c987af28b6c29676e41dd6c21c0c124a094708221d8\" id:\"49f91cc9adf02495f0879483a4145730ad7f884524873c6c6259a067fd5e2674\" pid:6218 exited_at:{seconds:1748316345 nanos:212983502}"
May 27 03:25:48.354540 kubelet[3234]: E0527 03:25:48.354466 3234 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-28-64)"
May 27 03:25:48.623182 systemd[1]: cri-containerd-492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a.scope: Deactivated successfully.
May 27 03:25:48.624540 systemd[1]: cri-containerd-492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a.scope: Consumed 3.220s CPU time, 89M memory peak, 93.1M read from disk.
May 27 03:25:48.736946 containerd[1990]: time="2025-05-27T03:25:48.736212560Z" level=info msg="received exit event container_id:\"492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a\" id:\"492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a\" pid:3039 exit_status:1 exited_at:{seconds:1748316348 nanos:735647077}"
May 27 03:25:48.769595 containerd[1990]: time="2025-05-27T03:25:48.769550304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a\" id:\"492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a\" pid:3039 exit_status:1 exited_at:{seconds:1748316348 nanos:735647077}"
May 27 03:25:48.878323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a-rootfs.mount: Deactivated successfully.
May 27 03:25:49.518622 kubelet[3234]: I0527 03:25:49.518576 3234 scope.go:117] "RemoveContainer" containerID="492de38001f49d8166193e091118224bac745eaa0b128bebe91cf08993a8284a"
May 27 03:25:49.595003 containerd[1990]: time="2025-05-27T03:25:49.594951617Z" level=info msg="CreateContainer within sandbox \"1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 27 03:25:49.767229 containerd[1990]: time="2025-05-27T03:25:49.767132939Z" level=info msg="Container 778124149d49867fd8702b6db1cc8fcc11aed09bcb304807b6b605f631c044eb: CDI devices from CRI Config.CDIDevices: []"
May 27 03:25:49.773177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288859234.mount: Deactivated successfully.
May 27 03:25:49.813195 containerd[1990]: time="2025-05-27T03:25:49.813151517Z" level=info msg="CreateContainer within sandbox \"1acb667e87a47bc74730db425c169c502a449682b3bf4073c7f8f4af2a0f0244\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"778124149d49867fd8702b6db1cc8fcc11aed09bcb304807b6b605f631c044eb\""
May 27 03:25:49.825669 containerd[1990]: time="2025-05-27T03:25:49.825622247Z" level=info msg="StartContainer for \"778124149d49867fd8702b6db1cc8fcc11aed09bcb304807b6b605f631c044eb\""
May 27 03:25:49.829640 containerd[1990]: time="2025-05-27T03:25:49.829569750Z" level=info msg="connecting to shim 778124149d49867fd8702b6db1cc8fcc11aed09bcb304807b6b605f631c044eb" address="unix:///run/containerd/s/dd1a0b29785832deb6f0839fbc3ef0fa0124499b83452d5d72fb13a298ba7b6e" protocol=ttrpc version=3
May 27 03:25:49.887935 systemd[1]: Started cri-containerd-778124149d49867fd8702b6db1cc8fcc11aed09bcb304807b6b605f631c044eb.scope - libcontainer container 778124149d49867fd8702b6db1cc8fcc11aed09bcb304807b6b605f631c044eb.
May 27 03:25:49.973254 containerd[1990]: time="2025-05-27T03:25:49.973206021Z" level=info msg="StartContainer for \"778124149d49867fd8702b6db1cc8fcc11aed09bcb304807b6b605f631c044eb\" returns successfully"
May 27 03:25:50.161129 systemd[1]: cri-containerd-917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200.scope: Deactivated successfully.
May 27 03:25:50.162156 systemd[1]: cri-containerd-917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200.scope: Consumed 9.973s CPU time, 102.5M memory peak, 68.6M read from disk.
May 27 03:25:50.167403 containerd[1990]: time="2025-05-27T03:25:50.167365959Z" level=info msg="received exit event container_id:\"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\" id:\"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\" pid:3570 exit_status:1 exited_at:{seconds:1748316350 nanos:166930439}"
May 27 03:25:50.167694 containerd[1990]: time="2025-05-27T03:25:50.167664981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\" id:\"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\" pid:3570 exit_status:1 exited_at:{seconds:1748316350 nanos:166930439}"
May 27 03:25:50.193308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200-rootfs.mount: Deactivated successfully.
May 27 03:25:50.513675 kubelet[3234]: I0527 03:25:50.513477 3234 scope.go:117] "RemoveContainer" containerID="917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200"
May 27 03:25:50.557855 containerd[1990]: time="2025-05-27T03:25:50.556703611Z" level=info msg="CreateContainer within sandbox \"ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 27 03:25:50.607249 containerd[1990]: time="2025-05-27T03:25:50.607203711Z" level=info msg="Container 78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8: CDI devices from CRI Config.CDIDevices: []"
May 27 03:25:50.617950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728186984.mount: Deactivated successfully.
May 27 03:25:50.624168 containerd[1990]: time="2025-05-27T03:25:50.624126735Z" level=info msg="CreateContainer within sandbox \"ae2cc073171b4c721f85701dc2b8d7420570c57dd81d766881792e393eb65c36\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8\""
May 27 03:25:50.624706 containerd[1990]: time="2025-05-27T03:25:50.624678200Z" level=info msg="StartContainer for \"78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8\""
May 27 03:25:50.625905 containerd[1990]: time="2025-05-27T03:25:50.625869206Z" level=info msg="connecting to shim 78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8" address="unix:///run/containerd/s/c3c258b8f95a04e5808cac4df96d45bc074137ac1a1b05794276f183c14f2e5a" protocol=ttrpc version=3
May 27 03:25:50.658683 systemd[1]: Started cri-containerd-78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8.scope - libcontainer container 78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8.
May 27 03:25:50.707696 containerd[1990]: time="2025-05-27T03:25:50.707616386Z" level=info msg="StartContainer for \"78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8\" returns successfully"
May 27 03:25:55.162624 systemd[1]: cri-containerd-a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676.scope: Deactivated successfully.
May 27 03:25:55.162991 systemd[1]: cri-containerd-a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676.scope: Consumed 2.923s CPU time, 41M memory peak, 57.2M read from disk.
May 27 03:25:55.167991 containerd[1990]: time="2025-05-27T03:25:55.167544591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676\" id:\"a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676\" pid:3069 exit_status:1 exited_at:{seconds:1748316355 nanos:164253751}"
May 27 03:25:55.167991 containerd[1990]: time="2025-05-27T03:25:55.167741861Z" level=info msg="received exit event container_id:\"a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676\" id:\"a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676\" pid:3069 exit_status:1 exited_at:{seconds:1748316355 nanos:164253751}"
May 27 03:25:55.249751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676-rootfs.mount: Deactivated successfully.
May 27 03:25:55.555996 kubelet[3234]: I0527 03:25:55.555899 3234 scope.go:117] "RemoveContainer" containerID="a3a946f6cd3bd59e8f1f849f92c5cf6443998c3eddfc4f1467eeb3d54edb6676"
May 27 03:25:55.559192 containerd[1990]: time="2025-05-27T03:25:55.558699487Z" level=info msg="CreateContainer within sandbox \"70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 27 03:25:55.579502 containerd[1990]: time="2025-05-27T03:25:55.579117612Z" level=info msg="Container d98b149928a0d102e42032c47204fa0e3334f136d3a21cb52d6aa71791bcc4b6: CDI devices from CRI Config.CDIDevices: []"
May 27 03:25:55.592590 containerd[1990]: time="2025-05-27T03:25:55.592471962Z" level=info msg="CreateContainer within sandbox \"70cd1999e621c5b0039f49d8df04207f04beb09e27d5b9efffd324fefbf90a69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d98b149928a0d102e42032c47204fa0e3334f136d3a21cb52d6aa71791bcc4b6\""
May 27 03:25:55.593130 containerd[1990]: time="2025-05-27T03:25:55.593099464Z" level=info msg="StartContainer for \"d98b149928a0d102e42032c47204fa0e3334f136d3a21cb52d6aa71791bcc4b6\""
May 27 03:25:55.594180 containerd[1990]: time="2025-05-27T03:25:55.594147035Z" level=info msg="connecting to shim d98b149928a0d102e42032c47204fa0e3334f136d3a21cb52d6aa71791bcc4b6" address="unix:///run/containerd/s/5f34323f859a7963fe496f0b3f77ce2da1028961e416f880c18286d27d089f69" protocol=ttrpc version=3
May 27 03:25:55.621683 systemd[1]: Started cri-containerd-d98b149928a0d102e42032c47204fa0e3334f136d3a21cb52d6aa71791bcc4b6.scope - libcontainer container d98b149928a0d102e42032c47204fa0e3334f136d3a21cb52d6aa71791bcc4b6.
May 27 03:25:55.675513 containerd[1990]: time="2025-05-27T03:25:55.675461184Z" level=info msg="StartContainer for \"d98b149928a0d102e42032c47204fa0e3334f136d3a21cb52d6aa71791bcc4b6\" returns successfully"
May 27 03:25:56.564402 containerd[1990]: time="2025-05-27T03:25:56.564280563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:25:56.565784 kubelet[3234]: E0527 03:25:56.565702 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-787bbfb675-hlbrh" podUID="e8dfedbc-afe6-4b30-b214-b9a2e156015d"
May 27 03:25:56.755861 containerd[1990]: time="2025-05-27T03:25:56.755712699Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:25:56.757902 containerd[1990]: time="2025-05-27T03:25:56.757782476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:25:56.757902 containerd[1990]: time="2025-05-27T03:25:56.757872723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:25:56.758553 kubelet[3234]: E0527 03:25:56.758197 3234 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:25:56.758553 kubelet[3234]: E0527 03:25:56.758334 3234 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:25:56.758553 kubelet[3234]: E0527 03:25:56.758468 3234 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt7fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-ghcj4_calico-system(2b70e4e3-90ce-46ab-b026-f5df1c2548db): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:25:56.759694 kubelet[3234]: E0527 03:25:56.759649 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-ghcj4" podUID="2b70e4e3-90ce-46ab-b026-f5df1c2548db"
May 27 03:25:58.065511 containerd[1990]: time="2025-05-27T03:25:58.065435442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce0ac8d50e27f6931f2acf9339ec90c7f96d4ebbd2e45a9285600bd6b67079f9\" id:\"3e7dc219aca86a458b554edd670230dfbe8428f3f88b719582998d42e61e9bea\" pid:6398 exit_status:1 exited_at:{seconds:1748316358 nanos:65111552}"
May 27 03:25:58.360079 kubelet[3234]: E0527 03:25:58.360025 3234 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-64?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 27 03:26:02.243852 systemd[1]: cri-containerd-78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8.scope: Deactivated successfully.
May 27 03:26:02.244744 systemd[1]: cri-containerd-78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8.scope: Consumed 271ms CPU time, 68.2M memory peak, 34.7M read from disk.
May 27 03:26:02.250094 containerd[1990]: time="2025-05-27T03:26:02.250041059Z" level=info msg="received exit event container_id:\"78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8\" id:\"78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8\" pid:6316 exit_status:1 exited_at:{seconds:1748316362 nanos:249441838}"
May 27 03:26:02.250618 containerd[1990]: time="2025-05-27T03:26:02.250156960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8\" id:\"78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8\" pid:6316 exit_status:1 exited_at:{seconds:1748316362 nanos:249441838}"
May 27 03:26:02.301395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8-rootfs.mount: Deactivated successfully.
May 27 03:26:02.586114 kubelet[3234]: I0527 03:26:02.585470 3234 scope.go:117] "RemoveContainer" containerID="917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200"
May 27 03:26:02.586114 kubelet[3234]: I0527 03:26:02.585880 3234 scope.go:117] "RemoveContainer" containerID="78315fbbc4f9adb6848b7fb8ae4a16acff183fce388b7606279aed3dad0c26f8"
May 27 03:26:02.586114 kubelet[3234]: E0527 03:26:02.586042 3234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-844669ff44-t8tcg_tigera-operator(36d06580-ea5a-49fe-b671-5f5935cc1bc0)\"" pod="tigera-operator/tigera-operator-844669ff44-t8tcg" podUID="36d06580-ea5a-49fe-b671-5f5935cc1bc0"
May 27 03:26:02.704801 containerd[1990]: time="2025-05-27T03:26:02.704755503Z" level=info msg="RemoveContainer for \"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\""
May 27 03:26:02.719130 containerd[1990]: time="2025-05-27T03:26:02.718952663Z" level=info msg="RemoveContainer for \"917dd654ebac61e44a33b78cc44ca6a247e82675be6c0206b3cbacec780aa200\" returns successfully"
May 27 03:26:08.365178 kubelet[3234]: E0527 03:26:08.364989 3234 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-64?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"